I was just wondering, how is this hash selected?
You just observe the chain and pick a block that is unlikely to change. For example, there is a rule that coinbase outputs need 100 confirmations before they can be spent. That means a reorg deeper than 100 blocks would cause real problems: newly created coins could already have been spent, and those transactions would disappear after the chain reorganization.
For example, at the time of writing, the latest block hash is:
00000000000000000002128242740a658cddd02874729a1bff5a485130d84c5a.
And you can go roughly 100 blocks back and say that everything up to
00000000000000000001fbcc91af1dc346662bc536cb4d8713cd992f00794970 is valid. If it were not, it would be in the news, because someone would have found a bug similar to the Value Overflow Incident in some earlier block (and even during that incident, the reorg was not 100 blocks deep, but something like 53).
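The 100-confirmation rule above can be sketched as a simple check. `COINBASE_MATURITY` is the actual constant name used in Bitcoin Core; the helper function itself is just illustrative:

```python
# Sketch of the coinbase maturity rule: a coinbase output can only be
# spent once it has 100 confirmations.
COINBASE_MATURITY = 100

def coinbase_spendable(coinbase_height: int, tip_height: int) -> bool:
    """Return True if a coinbase mined at coinbase_height is spendable
    when the chain tip is at tip_height."""
    confirmations = tip_height - coinbase_height + 1
    return confirmations >= COINBASE_MATURITY

# A coinbase mined at height 900000 becomes spendable at height 900099.
print(coinbase_spendable(900000, 900098))  # False (only 99 confirmations)
print(coinbase_spendable(900000, 900099))  # True (100 confirmations)
```

This is also why a reorg deeper than 100 blocks is so dangerous: it can invalidate coinbases whose outputs have already been spent further down the chain.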
Do the developers manually choose a hash?
Yes.
why?
You can change one block to another if you want, or disable the check entirely and always verify everything. It is just an assumption that makes synchronization faster. If you validated the chain once and you know that, for example, everything up to block 900k is valid, then you can trust yourself: as long as the hash of that block hasn't changed since you last checked, everything is still as valid as it was (unless you assume someone broke SHA-256 in the meantime, or did something similar).
But of course, it is up to you. In the same way, if you have synchronized the chain once, you can copy the data files and avoid synchronizing again. The client fully trusts the database you point it at, so if you checked it once and didn't tamper with anything, you can simply reuse it.
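In Bitcoin Core, this assumption is controlled by the `assumevalid` option, which can be set in `bitcoin.conf` (the hash below is the example block hash from earlier in this thread):

```
# bitcoin.conf
# Assume scripts in blocks up to (and including) this block are valid:
assumevalid=00000000000000000001fbcc91af1dc346662bc536cb4d8713cd992f00794970
# Or disable the optimization and verify all signatures:
# assumevalid=0
```

Note that headers, proof of work, and the UTXO set are still checked either way; only script/signature validation is skipped for the assumed-valid part of the chain.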
or is this dynamic?
It is adjusted manually from time to time. Right now, trusting block 911250 seems fine, but if the chain grows to one million blocks, that would mean the last 88750 blocks get fully checked, so with a fixed hash this optimization becomes less effective over time. Every so often it is manually updated to some more recent block. And by checking which block the hash points to, you can roughly estimate when it was last updated, and whether you need to bump it or not.
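The arithmetic above can be sketched like this (the heights come from this answer; the ten-minute block interval is the usual long-run average):

```python
# Rough estimate of how much work a stale assumevalid hash leaves behind.
ASSUMED_VALID_HEIGHT = 911_250   # block the pinned hash currently points to
FUTURE_TIP_HEIGHT = 1_000_000    # hypothetical future chain height

blocks_fully_checked = FUTURE_TIP_HEIGHT - ASSUMED_VALID_HEIGHT
print(blocks_fully_checked)  # 88750

# Blocks arrive roughly every 10 minutes on average, so the height of the
# pinned block also tells you approximately how long ago it was bumped.
days_since_bump = blocks_fully_checked * 10 / (60 * 24)
print(round(days_since_bump))  # roughly 616 days
```

So the further the tip pulls ahead of the pinned block, the more of the chain gets fully validated on each initial sync, which is exactly why the hash is periodically moved forward.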
If so what criteria is followed to select this hash?
Nothing more than performance, and the likelihood of seeing a block reorganization. If you need to validate everything, set it to the Genesis Block, or disable it entirely. And if you synchronized the chain recently and know that everything up to a given block is correct, you can set it there to avoid checking everything yet again, if you don't want to.
I'm just hoping 2 TB is able to host the entire blockchain for some years.
For secp256k1 signatures under the current 4 MB block weight limit, it should be acceptable. Some quantum-resistant signature schemes, though, would consume much more space than that. For now it should be sufficient, I guess, as long as you don't have to deal with quantum signatures.
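A back-of-envelope upper bound on chain growth under the current rules (a sketch only; real blocks are usually well under the 4 MB limit, so actual growth is considerably smaller):

```python
# Worst-case chain growth per year under the current 4 MB block weight limit.
BLOCKS_PER_YEAR = 365 * 24 * 6   # one block every ~10 minutes on average
MAX_BLOCK_MB = 4                 # maximum serialized block size (weight limit)

max_growth_gb = BLOCKS_PER_YEAR * MAX_BLOCK_MB / 1000
print(max_growth_gb)  # 210.24, i.e. at most ~210 GB per year
```

At that worst-case rate, 2 TB of free space would cover many years of growth; with typical block sizes it lasts even longer, and pruning can reduce the footprint drastically if needed.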