You just start up a node and it bootstraps and specializes without any user intervention at all. This is something that other distributed storage systems, like Tahoe-LAFS, don't have.
Sure, and nothing interesting or fancy is required for that to work. Our blockchain space is _well defined_, not some effectively infinite sparse state space. The access patterns to it are also well defined: all historical data is accessed with an equal, flat, small probability, and accessed sequentially. Recent blocks are accessed with an approximately exponential decay. Data needed to validate a new block or transaction is always available from the party that gave you that block or transaction.
So, a very nice load-balancing architecture falls right out of that. Everyone keeps recent blocks with an exponentially distributed window size, and everyone selects a uniform random hunk of the history, sized by their contributed storage and available bandwidth. This should result in nearly optimal traffic distribution, and it's attack resistant in a way seriously stronger than Freenet's node swapping— without the big bandwidth overhead of routing traffic through many intermediate nodes to pick up data that's ended up far from its correct specialization as IDs have drifted.
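The selection scheme described above can be sketched roughly as follows. This is only an illustrative toy, not a specified protocol: the `mean_window` value, the block-count units, and the `choose_storage` helper are all my own assumptions.

```python
import random

def choose_storage(chain_height, contributed_blocks, mean_window=144):
    """Pick what a node keeps: recent blocks with an exponentially
    distributed window size, plus a uniform random contiguous chunk of
    history sized by the node's contributed storage.

    Note: mean_window=144 (roughly a day of blocks) is an illustrative
    assumption, not a value from any specification."""
    # Recent blocks: window size drawn from an exponential distribution,
    # so aggregate demand for recent data is matched by an exponentially
    # decaying supply across nodes.
    window = min(chain_height, int(random.expovariate(1.0 / mean_window)) + 1)
    recent = range(chain_height - window, chain_height)

    # Historical data: a uniformly placed chunk, so independent choices
    # by many nodes cover the (flat-probability) history near-uniformly.
    chunk_len = min(contributed_blocks, chain_height)
    start = random.randrange(0, chain_height - chunk_len + 1)
    historical = range(start, start + chunk_len)
    return recent, historical

recent, historical = choose_storage(chain_height=500_000, contributed_blocks=10_000)
```

Because each node draws its historical chunk independently and uniformly, no routing or ID-space coordination is needed: expected coverage of every block converges to the same value as node count grows, which is exactly the flat access pattern of historical data.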
Nielsen's Law of Internet Bandwidth suggests that high-end home broadband users will have 10 Gbit/s connections by 2025. Does it not make sense to plan ahead?
Arguing "Does it not make sense to plan ahead" here sounds like some kind of cargo-cult engineering: "Planning ahead must be done. This is a plan. Therefore it must be done."
Any proposed actions need to be connected to solving actual problems (or at least ones that are reasonably and justifiably anticipated). What you're suggesting— to the extent that it's even concrete enough to talk about benefits or costs— would likely _decrease_ scaling relative to the current and/or most obvious designs by at least a constant factor, and more probably a constant plus a logarithmic factor. Worse, it would move costs from storage, which appears to have the best empirical scaling 'law', to bandwidth, which has the worst.
If you scale things based on the scaling laws you're assuming, nothing further is required. If you strap on all the nice, pretty, empirically observed exponential trends, then everything gets faster and everything automatically scales up no worse than the most limiting factor (which has historically been bandwidth and looks likely to remain so)— assuming no worse-than-linear performance. There are no worse-than-linear behaviors in the Bitcoin protocol that I'm aware of; any in the implementations are just that, and can be happily hammered out asynchronously over time. Given computers and bandwidth that are ~10^6 times better (up to a factor of 4 or so in either direction), you can have your ~10^6 transactions/s. Now— I'm skeptical that these exponential technology trends will hold even just in my lifetime. But if they don't, that results in a ceiling on what you can do in a decentralized system, one that twiddling around with the protocols can't raise without tossing the security model/decentralization.
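The "most limiting factor" argument above can be made concrete with a toy calculation. All the numbers here are illustrative placeholders, not measured Bitcoin figures; the point is only that when per-transaction cost in every resource is constant (total cost linear), a uniform ~10^6 technology improvement yields a ~10^6 capacity improvement, bounded throughout by the scarcest resource.

```python
def max_tps(resources, per_tx_cost):
    """Capacity is bounded by the scarcest resource,
    assuming per-transaction costs are constant (linear total cost)."""
    return min(resources[r] / per_tx_cost[r] for r in per_tx_cost)

# Arbitrary illustrative units: per-transaction cost of 1 unit in each resource.
per_tx = {"cpu": 1.0, "bandwidth": 1.0, "storage": 1.0}
today = {"cpu": 10.0, "bandwidth": 5.0, "storage": 50.0}

base = max_tps(today, per_tx)                     # bandwidth-limited: 5 tx/s
future = {r: v * 1e6 for r, v in today.items()}   # ~10^6 technology trend
assert max_tps(future, per_tx) == base * 1e6      # linear scaling carries through
```

If any component were superlinear (say, validation cost growing with history size), the improvement would no longer pass through one-for-one, which is why hammering out worse-than-linear implementation behavior matters.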
Maybe people will want to toss the decentralization of Bitcoin in order to scale it further than the technology supports. If so, I would find that regrettable, since if you want to do that you could just do it in an external system. But until I'm the boss of everything I suspect some people will continue to do things I find regrettable from time to time— I don't, however, see much point in participating in discussions about those things, since I certainly won't be participating in them.