One of the intellectual fathers of Bitcoin, Luke-Jr, sees that blocks are mostly full of spam and that they could be cut in half (to 0.5MB) to better serve the purpose of uncensorable money. The poor and the spammers may use Doge.
The only problem is that this decision comes from an artificial block size limit, which is a centralized regulatory decision about the maximum block size.
Satoshi was a central planner, setting the 21 million coin limit with controlled inflation. Engineering is centrally planned. I agree that we would probably be better off if the limit had been set at 0.5MB, but since such a consensus change would obviously be contentious, no one has ever actually bothered writing the code. Classic supporters, on the other hand, are willing to force their changes on people like me, without any real metric to even measure user support. (No, hash power does not say anything about my willingness to support bigger blocks as a user and node operator.)
If this value came from the free market, i.e. no miner wanting to create blocks over 0.5MB (or whatever size), then this would be fine.
How so? Different miners can set different soft limits. But without a defined hard limit, Miner A can publish a block that violates Miner B's soft limit. Miner B will reject the block. Suppose Miner A has a majority of hash power. Fairly soon, Miner A's chain will be longer. But since Miner B rejected one of Miner A's oversized blocks (which is still being mined on top of), the two miners will be working on two divergent chains. In this way, the network(s) can continue to fork endlessly, since there is no agreement on rule enforcement.
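To make those mechanics concrete, here's a toy Python sketch of two miners whose size rules disagree. All names and numbers are hypothetical illustration, not real Bitcoin code:

```python
# Toy model: two miners with different block-size rules diverge.
# Everything here is illustrative; real consensus code is far more involved.

class Node:
    def __init__(self, name, max_block_size):
        self.name = name
        self.max_block_size = max_block_size
        self.chain = []  # sizes of blocks this node has accepted

    def accepts(self, block_size):
        return block_size <= self.max_block_size

    def receive(self, block_size):
        # A node only extends its chain with blocks it considers valid.
        if self.accepts(block_size):
            self.chain.append(block_size)

miner_a = Node("A", max_block_size=2_000_000)  # tolerates blocks up to 2MB
miner_b = Node("B", max_block_size=1_000_000)  # enforces a 1MB limit

# Miner A (majority hash power) publishes a 1.5MB block.
for node in (miner_a, miner_b):
    node.receive(1_500_000)

# A extends its chain; B rejects the block and keeps mining on its own,
# shorter chain. From this point the two chains have permanently diverged.
print(len(miner_a.chain), len(miner_b.chain))  # -> 1 0
```

Once B has rejected a block that A keeps building on, no amount of additional hash power from A can ever bring B back onto A's chain.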
"Longest" doesn't mean anything on its own. The only thing that matters is the "longest valid chain." Validity is determined by consensus rules. Even if there is no consensus on a block size limit, that doesn't stop anyone from rejecting blocks on that basis. If one miner accepts that block and builds on it, and another miner rejects it, "longest" chain loses any meaning. The latter miner will never accept the longer invalid chain.
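The "longest valid" selection rule can be sketched in a few lines of Python. This is a hypothetical simplification (real nodes compare accumulated proof-of-work, not block count, and check many validity rules beyond size):

```python
# Sketch of the selection rule: a node picks the longest chain among
# those that pass its OWN validity rules; an invalid chain is ignored
# no matter how long it is. Names and values are illustrative.

MAX_BLOCK_SIZE = 1_000_000  # this node's consensus rule (1MB)

def is_valid(chain):
    # A chain is valid only if every block satisfies this node's rules.
    return all(size <= MAX_BLOCK_SIZE for size in chain)

def best_chain(candidate_chains):
    valid = [c for c in candidate_chains if is_valid(c)]
    return max(valid, key=len, default=[])

short_valid = [500_000, 900_000]
long_invalid = [500_000, 900_000, 1_500_000, 700_000]  # contains a >1MB block

# The longer chain loses because it fails validation.
print(best_chain([short_valid, long_invalid]) is short_valid)  # -> True
```

The key point: "longest" is only ever evaluated over the set of chains a node already considers valid, so a majority building an invalid chain cannot force acceptance.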
There is no consensus on the block size limit today. With Bitcoin Unlimited, many people would accept blocks over 1 MB even today.
There is consensus, because the network enforces a 1MB limit. That's a consensus rule. Even Classic is enforcing it.
Bitcoin Unlimited would cause a new chain fork to emerge every time a block was published that satisfied one set of nodes' consensus rules but violated another's. If Node A enforces 1MB, Node B enforces 2MB, Node C enforces 8MB and Node D (Unlimited) enforces no limit, this results in at least 3 chain forks--with a new fork for every new limit that emerges that is higher than the last. What possible good can come from that? "Freedom of block size?" Well, it results in chain forks and no more single global ledger called "Bitcoin."
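Here's a toy Python illustration of that A/B/C/D scenario. The node names and block sizes are hypothetical; the point is just that each distinct limit ends up on a distinct chain once blocks probing each limit get published:

```python
# Toy illustration of "freedom of block size": nodes with different
# limits partition into mutually incompatible chains. Hypothetical code.

nodes = {
    "A": 1_000_000,     # enforces 1MB
    "B": 2_000_000,     # enforces 2MB
    "C": 8_000_000,     # enforces 8MB
    "D": float("inf"),  # Unlimited: no cap at all
}
chains = {name: [] for name in nodes}

# Blocks of increasing size get published over time...
for block_size in (900_000, 1_500_000, 4_000_000, 16_000_000):
    for name, limit in nodes.items():
        # Each node extends its chain only with blocks it considers valid.
        if block_size <= limit:
            chains[name].append(block_size)

# Each distinct chain is a separate ledger: four limits, four chains
# (i.e. three fork points), and no single global "Bitcoin."
distinct = {tuple(c) for c in chains.values()}
print(len(distinct))  # -> 4
```

Once the 1.5MB block appears, A splits off; the 4MB block splits off B, and the 16MB block splits off C, leaving four incompatible ledgers.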
The only reason miners don't mine blocks over 1 MB today is that they act rationally: most nodes (and thus Bitcoin services) would reject such a block, and the chance of keeping the longest chain after mining an over-1 MB block is zero today.
And how much of their rationale stems from the fact that they know the network is enforcing a 1MB hard limit?
If there is no limit, how do they agree on what limit (if any) to enforce? How do they know what network nodes will be willing to relay (hence what users are willing to accept)? Any disagreement can easily result in a chain fork.
The market suggests that people are rational, but it certainly doesn't suggest that rational actors must be correct. If they are incorrect, in this case, we will chain fork into multiple networks.
BTW, Bitcoin Unlimited is a very underestimated client, with good features and active development. Definitely worth considering when you're undecided whether to run Core or Classic.
IIRC even BU developers have admitted that their code is not sufficiently regression tested. There is hardly any peer review, and no reason to assume the code has been thoroughly audited by experts in network security or cryptography, or by qualified systems/database engineers. Thin blocks is a cool idea, but unfortunately users report widespread data gaps, and technical critics say its relay savings can't compete with the relay network, so it's not an adequate replacement for it.