People who are concerned about "blockchain bloat" really don't understand the technical details of bitcoin. They try to run Bitcoin Core without pruning and then complain that they've run out of disk space. They complain that it won't be possible for enough people to run a "FULL" node.
The fact that pruning exists at all is an admission that block chain size matters. We do indeed complain that it is not possible to run a full node on average systems. Pruning doesn't solve the problem; it just kicks it down the road.
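(For reference, enabling pruning is a single setting; from memory, 550 MiB is the minimum Bitcoin Core will accept. The node still downloads and validates everything - it just discards old raw block files afterwards.)

    # bitcoin.conf - validate everything, keep only ~550 MiB of raw block files
    prune=550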
Note that they are at least partly correct. As I mentioned, you'll always need some nodes to store the entire history.
You need the network to store the entire history - preferably with redundancy. It is not necessary for all nodes to keep all blocks. The current system is the trivial solution, not the only solution.
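To illustrate the point, here is a purely hypothetical sketch - nothing Bitcoin actually implements, and every name and number below is made up - of how archival duty could be spread so the network keeps every block roughly twenty times over while no single node stores more than a small slice of history:

    import hashlib

    # Purely hypothetical illustration: each node derives, from its own ID,
    # which block heights it archives. With N nodes each keeping roughly a
    # fraction R/N of history, every block is held by about R nodes.

    def keeps_block(node_id: str, height: int, nodes: int, redundancy: int) -> bool:
        """True if this node should archive the block at `height`."""
        digest = hashlib.sha256(f"{node_id}:{height}".encode()).digest()
        fraction = int.from_bytes(digest[:8], "big") / 2**64
        return fraction < redundancy / nodes

    # Example: 10,000 archival nodes, ~20 copies of each block across the network.
    node = "node-42"
    kept = [h for h in range(800_000) if keeps_block(node, h, nodes=10_000, redundancy=20)]
    print(f"{node} archives {len(kept)} of 800,000 blocks "
          f"({100 * len(kept) / 800_000:.2f}% of history)")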
The bigger issue with larger block sizes isn't the size of the blockchain, it's the size of the individual blocks themselves. Before a miner can reliably build a new block on top of a recently solved block in the chain, they need to verify that the recently solved block is valid. This means they have to take the time to download the entire block, and verify the block header and every transaction in the block.
Oh. Boo hoo. Those poor miners starving on the streets might have to do a little more work to maximise their profits. The difficulty will adjust to any extra work required, or they will just throw more hardware at it. This is a non-argument, since they are just doing what miners do. Maybe more importantly, the miners will have to switch from maximising the block subsidy to maximising fees at some point anyway. This is a bait and switch from full nodes to miners (not the same thing at all) in an attempt to bolster the argument.
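For context, the verification work being described above amounts to roughly the following. This is a heavily simplified Python sketch; the Block shape and helper names are invented for illustration, and real validation also covers scripts, signatures and the UTXO set:

    import hashlib
    from dataclasses import dataclass

    # Heavily simplified sketch of "verify the block" - real validation also
    # checks scripts, signatures, the UTXO set and many other consensus rules.

    @dataclass
    class Block:
        header: bytes           # 80-byte header (version, prev-hash, merkle root, ...)
        merkle_root: bytes      # merkle root committed to in the header
        transactions: list      # raw serialized transactions

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def header_meets_target(header: bytes, target: int) -> bool:
        # Proof of work: the header hash, read as a little-endian integer,
        # must fall below the current difficulty target.
        return int.from_bytes(double_sha256(header), "little") < target

    def compute_merkle_root(tx_hashes: list) -> bytes:
        # Pair-and-hash transaction hashes upward until a single root remains,
        # duplicating the last hash on odd-sized levels (as Bitcoin does).
        level = list(tx_hashes)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [double_sha256(a + b) for a, b in zip(level[::2], level[1::2])]
        return level[0]

    def validate_block(block: Block, target: int) -> bool:
        if not header_meets_target(block.header, target):
            return False
        tx_hashes = [double_sha256(tx) for tx in block.transactions]
        if compute_merkle_root(tx_hashes) != block.merkle_root:
            return False
        # ...then every transaction is checked against the UTXO set, which is
        # the part of the work that actually grows with block size.
        return True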
Those miners or pools with slower internet connections or slower processors will be at a significant disadvantage compared to those with extremely high-speed connections and high-end processors. This disadvantage could result in the higher-end miners and pools getting a head start on the next block and solving more blocks than the disadvantaged miners and pools. Eventually the loss of revenue could result in mining power consolidating in just the few miners or pools with access to the fastest possible internet connections and newest processors.
Oh, you mean those not already in data centres? This is the centralisation argument against the current mining regime, and it applies regardless of distribution mechanisms.
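The "head start" is simple bandwidth arithmetic. A back-of-envelope sketch with assumed link speeds, ignoring compact blocks, relay networks and everything else that exists to blunt exactly this effect:

    # Back-of-envelope head start from block download time alone.
    # Assumed link speeds; ignores compact blocks, relay networks, validation.

    AVG_BLOCK_INTERVAL_S = 600

    def download_seconds(block_mb: float, link_mbit_per_s: float) -> float:
        return block_mb * 8 / link_mbit_per_s

    for block_mb in (1, 10, 100, 1000):
        slow = download_seconds(block_mb, 10)     # 10 Mbit/s consumer link
        fast = download_seconds(block_mb, 1000)   # 1 Gbit/s data-centre link
        head_start = slow - fast
        print(f"{block_mb:>5} MB block: faster miner starts ~{head_start:6.1f} s earlier "
              f"(blocks arrive every ~{AVG_BLOCK_INTERVAL_S} s on average)")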
It also may become very difficult for those on very slow connections to download and verify a single block before the next block is solved. As such, even lightweight wallets and pruning nodes would continuously fall farther and farther behind on synchronization and never be able to catch up.
Complete speculation. Even an Arduino could process a block in less than 10 minutes.
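The "never catch up" scenario also has a concrete threshold: a node only falls behind permanently if fetching and verifying one block takes longer than the ~600-second average block interval. With assumed block sizes, the bandwidth floor is tiny at today's sizes:

    # Minimum sustained bandwidth needed just to stay at the chain tip:
    # one block must be fetched per ~600 s average block interval.
    # Block sizes below are assumed for illustration.

    AVG_BLOCK_INTERVAL_S = 600

    for block_mb in (1, 10, 100, 1000):
        floor_mbit_per_s = block_mb * 8 / AVG_BLOCK_INTERVAL_S
        print(f"{block_mb:>5} MB blocks: need at least {floor_mbit_per_s:7.3f} Mbit/s sustained")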
The unanswered question is: how big is too big when it comes to blocks? Right now we are operating with 1 MB. Is 2 MB too big? How about 10 MB? 100 MB? 1 GB? 1 TB? Who gets to choose, and how do we get EVERYONE to agree on that choice?
No maximum limit. The trade-off between fee revenue (bigger blocks collect more fees) and the competition to be first on the block chain (hashing efficiency and transmission time) will yield an optimum block size that depends on the miners' capabilities - no fixed block size needed. Of course, this destroys the fake scarcity introduced by the economists, since all transactions can be catered for.
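That claimed equilibrium can be put in a toy model - every constant below is illustrative, not a measurement: expected revenue is the subsidy plus fees, discounted by an orphan risk that grows with propagation time, and it peaks at a finite block size without any protocol-imposed cap:

    import math

    # Toy model of the fee-versus-orphan-risk trade-off described above.
    # Every constant here is illustrative, not a measurement.

    SUBSIDY_BTC = 3.125          # block subsidy in BTC
    FEE_PER_MB_BTC = 0.05        # assumed fees collected per MB of transactions
    PROPAGATION_S_PER_MB = 2.0   # assumed extra relay/validation time per MB
    AVG_BLOCK_INTERVAL_S = 600

    def expected_revenue(size_mb: float) -> float:
        # Larger blocks collect more fees, but take longer to propagate,
        # widening the window in which a competing block can orphan this one
        # (simple exponential race model).
        orphan_prob = 1 - math.exp(-PROPAGATION_S_PER_MB * size_mb / AVG_BLOCK_INTERVAL_S)
        return (SUBSIDY_BTC + FEE_PER_MB_BTC * size_mb) * (1 - orphan_prob)

    best_mb = max(range(1, 2001), key=expected_revenue)
    print(f"Revenue-maximising block size under these toy numbers: ~{best_mb} MB")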