What is quite inexplicable is why Gavin is proposing such an abrupt change: going directly to 20MB blocks. It would be much more rational to take a "slow" approach, i.e. increasing the block size to 2MB, then to 4MB, then to 8MB and so forth. I'm 100% sure consensus would be much easier to reach in that scenario. 2MB would double the current capacity with almost no difference in overhead for miners and full nodes.
Why on earth is Gavin proposing such a dramatic change, which could have deep implications, instead of taking a cautious, slow and progressive approach?
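The gradual approach described above (2MB, then 4MB, then 8MB, and so on) can be sketched as a simple doubling schedule. The starting size, cap, and step rule here are illustrative assumptions for the sake of argument, not figures from any actual proposal:

```python
# Hypothetical gradual schedule: double the block size limit at each step
# instead of jumping straight to 20 MB. Numbers are illustrative only.
def doubling_schedule(start_mb=1, target_mb=20):
    """Return the sequence of block size limits (in MB), doubling each
    step and capping at the target."""
    size = start_mb
    steps = []
    while size < target_mb:
        size *= 2
        steps.append(min(size, target_mb))
    return steps

print(doubling_schedule())  # [2, 4, 8, 16, 20]
```

Each step at most doubles the bandwidth and storage burden on nodes, which gives the network time to observe the effect on orphan rates before the next increase.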
You could argue that. You could also argue that we don't want to keep bandaiding this over and over.
If you're gonna use a blocksize increase as a bandaid, do it once, and make it big enough that we have plenty of time to find a better solution before asking for another fork.
Please go read the bitcoin-dev mailing list. Most technical guys are against such an abrupt change because it can lead to centralization pretty quickly. The conflict of interest thing is pure BS. Many pools, especially the Chinese ones which have difficulty accessing 100Mbps connections behind the Great Firewall, are very worried about their orphan rates skyrocketing and are thus against such an abrupt change; it's all on the bitcoin-dev mailing list for everybody to see.
The fact is that there are many questions left unanswered and a very real centralization threat, but still most agree that they would support a more gradual change, even to 5MB or 8MB. That is 5x and 8x the current capacity, so why in the hell are Gavin & Hearn (who has a track record of pushing for many very nefarious features, see blacklisting for example) pushing for a 20x increase in block size? Why not go for a gradual approach? The "we don't want bandaiding this over and over" line is pure BS. I think the risk of centralization is too high to be reckless; it is much better to bandaid a couple of times than to just kill bitcoin by making it centralized.