Bitcoin Forum
Poll
Question: Is Mike Hearn's "Crash Landing" a reasonable prediction of events when the average block size approaches 1MB?
Yes, and overly full blocks are likely in 2015 - 6 (23.1%)
Yes, and overly full blocks are likely in 2016 - 7 (26.9%)
Yes, and overly full blocks are likely in 2017 - 1 (3.8%)
Yes, but not until 2018 or later - 0 (0%)
No, but a ROUGH transition when blocks are mostly full, fees rise and on-chain tx volume will plateau - 2 (7.7%)
No, and a SMOOTH transition when blocks are mostly full, fees rise and on-chain tx volume will plateau - 9 (34.6%)
Don't know or don't care or want a different option - 1 (3.8%)
Total Voters: 26

Author Topic: Dev & Tech Opinion on Mike Hearn's "Crash Landing" scenario  (Read 1749 times)
solex (OP)
Legendary
Activity: 1078 | Merit: 1002
May 08, 2015, 07:19:35 AM
Last edit: May 08, 2015, 11:26:10 AM by solex
#1

This sub-forum is intended for debate about technical matters concerning Bitcoin, specifically for the purpose of improving it.
Bitcoin expert Mike Hearn has written an article, "Crash Landing", where he describes what will happen if ecosystem growth continues and developers perform only normal maintenance and enhancement work. It paints a very serious picture, which is pretty much what I imagined when I learned about the max block size in January 2013.

This article was written in the context of vigorous debate about the block size. But what I want to do here is to take a step back and find out the opinion of the technically informed community on Bitcointalk.

I urge people to read it and then vote according to their considered opinion.
https://medium.com/@octskyward/crash-landing-f5cc19908e32

If there is a majority view that what is described is a problem, then maybe consensus can be achieved on mitigation of the risk of it coming to pass.

Edit: adding the average block size chart (2 years, 7-day smoothed):
https://blockchain.info/charts/avg-block-size?showDataPoints=false&show_header=true&daysAverageString=7&timespan=2year&scale=0&address=

gmaxwell
Moderator
Legendary
Activity: 4172 | Merit: 8420
May 08, 2015, 08:05:38 AM
Last edit: May 08, 2015, 08:38:35 AM by gmaxwell
#2

We've actually hit the soft limit before, consistently for long periods and did not see the negative effects described there (beyond confirmation times for lower fee transactions going up, of course).

If the frightful claims there are true then arguably Bitcoin is just doomed; after all, there is no force on earth that constrains how many transactions people might seek to produce: any amount of capacity could be filled at any time. And of course, there are limits: even if you abandon decentralization as a goal entirely, computers are made out of matter and consume energy; they are finite, and Bitcoin's scale is not increased by having more users. There are single services today that do enough transactions internally to fill gigabyte blocks if those transactions were dumped onto the chain, so whatever the limit is, it's likely that a single entity could hit it if it decided to do so. Fortunately, the fee market arbitrates access neutrally, and it does so at arbitrary scale. Mike completely disregards this because he believes transactions should be free (which should be ringing bells for anyone thinking X MB blocks won't be constantly full at X MB, especially with all the companies being created to do things like file storage in the Bitcoin blockchain).

One of the mechanisms to make running against the limit more tolerable, which is simple to implement and easy for wallets to handle, is replace-by-fee (in the boring, greater-outputs mode; not talking about the scorched-earth stuff), but that's something that Mike has vigorously opposed for some reason. Likewise CPFP (child pays for parent) potentially helps some situations too, but it's mostly only had technical love from Luke and Matt. It's perhaps no coincidence that all this work made progress in early 2013 when blocks were full, and has not had much of any attention since.
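
For illustration, here is a minimal Python sketch of the "boring" greater-outputs replacement check; the names and structure are hypothetical, not taken from Bitcoin Core:
Code:
# Hypothetical sketch of "boring" (greater-outputs) replace-by-fee;
# not the actual Bitcoin Core logic.

def acceptable_replacement(original, replacement):
    """Each tx is a dict: {'outputs': {script: value_satoshis}, 'fee': int}."""
    # Every output of the original must still be paid at >= its old value,
    # so no recipient is harmed by the replacement.
    for script, value in original['outputs'].items():
        if replacement['outputs'].get(script, 0) < value:
            return False
    # The replacement must pay a strictly higher fee, so relaying the
    # extra transaction is paid for rather than being free relay bandwidth.
    return replacement['fee'] > original['fee']

# Usage: bump the fee by shaving only your own change output.
orig = {'outputs': {'merchant': 100_000, 'change': 49_000}, 'fee': 1_000}
bump = {'outputs': {'merchant': 100_000, 'change': 45_000}, 'fee': 5_000}
assert acceptable_replacement(orig, bump)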

More than a few of the claims there are just bizarre, e.g.
Quote
Some Bitcoin Core developers believe that the reject message should be something only used for debugging and not something apps can rely upon. So there is no guarantee that a wallet would learn that its transaction didn’t fit.

The particular issue there is that the reject messages are fundamentally unhelpful for this (though they're a nice example of another railroaded feature, one that introduced a remotely exploitable vulnerability). The issue is that just because node X accepted your transaction doesn't tell you whether node Y, N hops away, did or didn't; in particular, it doesn't tell you whether there is even a single miner anywhere in the network that rejected it. What would you expect to avoid this? Every node flooding a message for every transaction it rejects to every other node, i.e. a rejection causing nodes^2 traffic? Nodes do produce rejects today, but it's not anyone's opinion that prevents a guarantee there; the nature of a distributed/decentralized system does. The whole notion of a reject being useful here is an artifact of erroneously trying to shoehorn a model from centralized client/server systems into Bitcoin, which is fundamentally unlike that.
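
To put a rough number on that quadratic blow-up (the node count here is an assumption purely for illustration):
Code:
# Back-of-envelope for the nodes^2 traffic described above.
# The node count is an assumption for illustration only.
nodes = 6_000
# If every node flooded its own reject for a single transaction to
# every other node, one rejected transaction would cost on the order
# of nodes^2 messages:
messages_per_rejected_tx = nodes * (nodes - 1)
print(f"{messages_per_rejected_tx:,}")  # 35,994,000 messages for ONE tx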

Quote
I don’t know how fast this situation would play out, but as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable
The amount of transactions in memory is strictly limited by the number of outputs in the UTXO set, as well as by a rate limiter/priority for free transactions; there is technically an upper bound (though it's not terribly relevant, because it's high and the limiter means it takes forever to reach it). Of course, it's trivial to keep the mempool constantly bounded, but there has been little interest in completing that because the theoretical large size is not believed to be practically exploitable given the limits. There are patches, though they're not liked by those who think that never forgetting anything helps zero-conf security. (I don't generally, considering that there are much easier ways to defraud on zero-conf than hoping some node forgets a very low priority zero-conf transaction.)
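
A minimal sketch of the kind of bounded mempool such patches aim at, evicting the lowest-feerate transactions first; structure and names are hypothetical, not the actual patches:
Code:
import heapq

# Hypothetical sketch of a size-bounded mempool that evicts the
# lowest-feerate transactions first; not the actual patches mentioned.
class BoundedMempool:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.heap = []   # min-heap of (feerate, txid)
        self.txs = {}    # txid -> size in bytes

    def add(self, txid, size, fee):
        self.txs[txid] = size
        heapq.heappush(self.heap, (fee / size, txid))
        self.used += size
        # Evict the cheapest transactions until we fit under the cap.
        while self.used > self.max_bytes and self.heap:
            _, victim = heapq.heappop(self.heap)
            if victim in self.txs:               # ignore stale entries
                self.used -= self.txs.pop(victim)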

The comments about the filling-up-memory stuff are grimly amusing to me in general, for reasons I currently can't discuss in public (please feel free to ping me in six months).

Overall, I think the article does a good job of suggesting that the goal of the recent blocksize proposal is a far more wide-spanning change than just incrementing the blocksize to make necessary room: it is also a move to substantially change the original long-term security model to an underspecified one which doesn't involve fees, a trajectory toward an unlimited blocksize that processes any transactions that come in, at any cost, even if that means centralizing the whole network onto a single mega-node in order to accept the scale. Or at least that appears to be the only move that has a clear answer to the case of 'there will be doom if the users make too many transactions' (the answer being that the datacenter adds more storage containers full of equipment).


virtualx
Hero Member
Activity: 672 | Merit: 507
May 08, 2015, 08:41:30 AM
#3

Quote
If the frightful claims there are true then arguably Bitcoin is just doomed; after all, there is no force on earth that constrains how many transactions people might seek to produce: any amount of capacity could be filled at any time. And of course, there are limits: even if you abandon decentralization as a goal entirely, computers are made out of matter and consume energy; they are finite, and Bitcoin's scale is not increased by having more users. There are single services today that do enough transactions internally to fill gigabyte blocks if those transactions were dumped onto the chain, so whatever the limit is, it's likely that a single entity could hit it if it decided to do so. Fortunately, the fee market arbitrates access neutrally, and it does so at arbitrary scale. Mike completely disregards this because he believes transactions should be free (which should be ringing bells for anyone thinking X MB blocks won't be constantly full at X MB, especially with all the companies being created to do things like file storage in the Bitcoin blockchain).

That's correct, but users are limited by the amount of bitcoins they possess. Multiple transactions, while feasible, add additional fees, and sellers want one transaction in stores by default. I think free transactions on a Bitcoin-like network are not possible because of the cost involved.

Quote
even if that means centralizing the whole network onto a single mega-node in order to accept the scale. Or at least that appears to be the only move that has a clear answer to the case of 'there will be doom if the users make too many transactions' (the answer being that the datacenter adds more storage containers full of equipment).
Is scaling infeasible with a decentralized network? I do not think a centralized mega-node is the solution to this problem.

jl2012
Legendary
Activity: 1792 | Merit: 1097
May 08, 2015, 09:29:23 AM
#4

Quote
We've actually hit the soft limit before, consistently for long periods and did not see the negative effects described there (beyond confirmation times for lower fee transactions going up, of course).


Which period are you specifically talking about? And by soft limit do you mean 250kB?

solex (OP)
Legendary
Activity: 1078 | Merit: 1002
May 08, 2015, 12:15:51 PM
#5

Thank you for taking the time to make a detailed response; as usual there is a lot to consider.
I want to add a few points.

Quote
We've actually hit the soft limit before, consistently for long periods and did not see the negative effects described there (beyond confirmation times for lower fee transactions going up, of course).
This is at odds with an earlier reply on this, which matched my recollection, especially regarding the 250KB soft-limit on March 6th, 2013:
Unfortunately over-eager increases of the soft-limit have denied us the opportunity to learn from experience under congestion and the motivation to create tools and optimize software to deal with congestion (fee-replacement, micropayment hubs, etc).
And this had no hope of being properly exercised as, IIRC, not all mining pools were on board: Eligius had a 500KB limit at the time, and a reasonable percentage of the hashing power.

Quote
Look at the huge abundance of space wasting uncompressed keys (it requires ~ one line of code to compress a bitcoin pubkey) on the network to get an idea of how little pressure there exists to optimize use of the blockchain public-good right now.
Regarding efficiencies, your comment at the same time about compressed pubkeys deserves attention. Are you saying that new blocks in the blockchain could easily be made smaller with this compression? It seems a valuable benefit at this time.
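
For the curious, a minimal sketch of what that compression amounts to (illustrative only, not wallet code): an uncompressed secp256k1 key is 0x04 followed by the 32-byte X and Y coordinates, and since Y is recoverable from X plus its parity, only the parity needs to be kept:
Code:
# Minimal illustration of secp256k1 public-key compression:
# 65 bytes (0x04 || X || Y) become 33 bytes (parity prefix || X).
def compress_pubkey(uncompressed: bytes) -> bytes:
    assert len(uncompressed) == 65 and uncompressed[0] == 0x04
    x, y = uncompressed[1:33], uncompressed[33:65]
    prefix = b'\x02' if y[-1] % 2 == 0 else b'\x03'  # parity of Y
    return prefix + x

# Every key stored this way saves 32 bytes of block space.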

Quote
Fortunately, the fee market arbitrates access neutrally, and it does so at arbitrary scale. Mike completely disregards this because he believes transactions should be free (which should be ringing bells for anyone thinking X MB blocks won't be constantly full at X MB, especially with all the companies being created to do things like file storage in the Bitcoin blockchain).
Then perhaps the provision of free transactions should be reviewed in the light of vanishing block space. A simple change might be doubling the necessary days-destroyed per BTC.
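
For reference, the legacy free-transaction "priority" rule being referred to worked roughly as below (threshold as in the old Bitcoin Core code; details simplified):
Code:
# Rough sketch of Bitcoin Core's legacy free-transaction priority rule.
COIN = 100_000_000
# Old threshold: a 1 BTC input, one day (144 blocks) old, in a 250-byte tx.
FREE_THRESHOLD = COIN * 144 // 250   # 57,600,000

def priority(inputs, tx_size_bytes):
    """inputs: list of (value_satoshis, age_in_blocks) pairs."""
    return sum(value * age for value, age in inputs) / tx_size_bytes

tx_inputs = [(COIN, 144)]                 # one 1-BTC input, one day old
p = priority(tx_inputs, 250)
print(p >= FREE_THRESHOLD)        # True: relays free under the old rule
print(p >= 2 * FREE_THRESHOLD)    # False: would fail if the required
                                  # days-destroyed were doubled as suggested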

Quote
One of the mechanisms to make running against the limit more tolerable, which is simple to implement and easy for wallets to handle, is replace-by-fee (in the boring, greater-outputs mode; not talking about the scorched-earth stuff), but that's something that Mike has vigorously opposed for some reason.
If Jeff is OK with RBF in the boring mode, then that would be a good improvement when blockspace is under pressure. But I know he is rightly exercised over RBF-SE, which is yet another ideological debate in itself.

Quote
The particular issue there is that the reject messages are fundamentally unhelpful for this (though they're a nice example of another railroaded feature, one that introduced a remotely exploitable vulnerability). The issue is that just because node X accepted your transaction doesn't tell you whether node Y, N hops away, did or didn't; in particular, it doesn't tell you whether there is even a single miner anywhere in the network that rejected it. What would you expect to avoid this? Every node flooding a message for every transaction it rejects to every other node, i.e. a rejection causing nodes^2 traffic? Nodes do produce rejects today, but it's not anyone's opinion that prevents a guarantee there; the nature of a distributed/decentralized system does. The whole notion of a reject being useful here is an artifact of erroneously trying to shoehorn a model from centralized client/server systems into Bitcoin, which is fundamentally unlike that.
OK. It makes sense that reject messages do not fit into decentralized systems in a meaningful way.

Quote
The comments about the filling-up-memory stuff are grimly amusing to me in general, for reasons I currently can't discuss in public (please feel free to ping me in six months).
Sure thing. I would be keen to learn more about this at that time.

Quote
Overall, I think the article does a good job of suggesting that the goal of the recent blocksize proposal is a far more wide-spanning change than just incrementing the blocksize to make necessary room: it is also a move to substantially change the original long-term security model to an underspecified one which doesn't involve fees, a trajectory toward an unlimited blocksize that processes any transactions that come in, at any cost, even if that means centralizing the whole network onto a single mega-node in order to accept the scale. Or at least that appears to be the only move that has a clear answer to the case of 'there will be doom if the users make too many transactions' (the answer being that the datacenter adds more storage containers full of equipment).
Well, the goal of my interest in the block-size proposal is to see a sustained decay in confirmation times avoided, which would otherwise cause negative experiences for a great many users, generate negative publicity, and tarnish Bitcoin in the minds of future users who may then decide not to try it out. All of these together would probably reduce full-node numbers even faster than larger blocks would.

Cryddit
Legendary
Activity: 924 | Merit: 1129
May 08, 2015, 09:41:15 PM
#6

About reject messages: it's possible for a node to say that it is rejecting a new tx, but doing so doesn't reveal much w/r/t the behavior of the whole network. So that's not terribly useful.

The net effect of full blocks - soft limit or hard - is that confirmation time goes up and transaction confirmation becomes unreliable.  It takes longer if it happens and until that time you don't know whether it is going to happen or not.  This is a recipe for frustration.  People will HATE this with a capital H. 

There are two predictable results:  Higher fee rates and lower transaction volumes.  It is my opinion that frustrated users leaving would lower transaction volumes to a rate where miners cannot be supported by higher fee rates.  In fact I opine that frustrated users leaving would lower transaction volumes *so* much that fee rates would not sustainably increase at all.   If you want to support miners, you want very low fee rates on very high transaction volumes.

There is a market inaccuracy in the structure of Bitcoin: the costs of the bandwidth and storage paid for by fees are not being collected by the full-node operators who provide that bandwidth and storage. The block subsidy pays for network security, which is what the miners provide by hashing; but fees proportional to bandwidth and storage costs, in an accurate market, should be going to the people who provide the bandwidth and storage. Otherwise we can expect a "tragedy of the commons" situation to develop in the long run, as miners are the only ones motivated to provide that bandwidth and storage.

One can expect that bandwidth provided by a miner will NOT serve to distribute tx to all nodes - only to the miner. The result, at the extreme, would be that no one other than the miner will even see the tx in the memory pool until a block containing the tx is published. Miners might even have entirely disjoint sets of tx, none of which will ever confirm until or unless that particular miner gets a block.
Zangelbert Bingledack
Legendary
Activity: 1036 | Merit: 1000
June 02, 2015, 12:13:19 PM
#7

The "Crash Landing" article has been an instrumental piece in convincing the reddit crowd toward bigger blocks.

Would any other devs like to weigh in?