dunno what half the technology you're talking about is, probably cuz I don't run a pool
but it sounds like you're saying the pools have their own independent difficulty adjuster to keep the share rate pretty static, and the only problem is a new pool starting at 1 instead of at something like 100,000. Sounds stable enough. Do all pools use such load balancing measures?
Btw yeah, I would assume all pools have to verify a submitted share so they can accept it or reject it as a correct answer. So if they send out a block calculation with, say, 1000x easier difficulty than the real block but with the same base data, once someone's mining client does 10 billion calculations and finds a sufficiently low hash, the pool server won't just take their word for it. It has to re-run that 1 single hash to verify the result is actually low enough. So if my GPU does 10 billion hashes in a minute or whatever, the server only has to verify the one hash I claim is good. A Pentium 2 could do that in 1 ms.
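Roughly what I'd picture the verification step looking like on the pool side (just a sketch, assuming a Bitcoin-style double-SHA256 proof of work; the function names are made up and a real pool would rebuild the exact 80-byte header from the job it handed out plus the miner's nonce):

```python
import hashlib

def block_hash(header_bytes: bytes) -> int:
    """Double SHA-256 of the block header, read as a little-endian integer."""
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return int.from_bytes(digest, "little")

def verify_share(header_bytes: bytes, share_target: int) -> bool:
    """One hash per submitted share: cheap for the server, no trust required."""
    return block_hash(header_bytes) <= share_target
```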
But the problem is, what if it's 1 server and there are a million shares coming in at a time? Then you're back up to somewhat big numbers. A million shares per second is only 1 MH/s of verification work, and any server chip can do that raw hashing, but this isn't a bitstream. It's 1 million individual requests coming over the internet that the NIC has to receive, parse, load into memory, etc., so you're not quite going to see the same performance as a desktop running the operation solo in a repetitive fashion.
But, if the people above me are referring to an automatic difficulty adjustment for the shares themselves within a pool, I would assume you can just set a target of 1000 shares per minute or something and it will 10x the difficulty if it has to in order to keep the share rate down, something like the sketch below.
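A rough sketch of the kind of per-miner difficulty loop I mean (the target rate, window, and clamping rule are all made-up numbers for illustration, not how any particular pool actually does it):

```python
TARGET_SHARES_PER_MIN = 1000.0
WINDOW_SECONDS = 60.0

def adjust_difficulty(current_diff: float, shares_in_window: int) -> float:
    """Scale share difficulty so a miner lands near the target share rate."""
    observed_per_min = shares_in_window * (60.0 / WINDOW_SECONDS)
    if observed_per_min == 0:
        # No shares at all this window: make it easier.
        return max(current_diff / 2, 1.0)
    ratio = observed_per_min / TARGET_SHARES_PER_MIN
    # Cap each retarget at 10x up or down per window.
    ratio = min(max(ratio, 0.1), 10.0)
    return max(current_diff * ratio, 1.0)
```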