Author Topic: The MAX_BLOCK_SIZE fork  (Read 35542 times)
Ari
Member
Activity: 75
Merit: 10
February 11, 2013, 05:22:12 PM
 #141

As much as I would like to see some sort of constraint on blockchain bloat, if this is significantly curtailed then I suspect S.DICE shareholders will invest in mining.  I suspect that getting support from miners was a large part of the motivation for taking S.DICE public, as there clearly wasn't a need to raise capital.
zebedee
Donator
Hero Member
Activity: 668
Merit: 500
February 25, 2013, 04:36:34 AM
 #142

Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, the other miners may simply drop it. That alone will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).


I'm a bit late to this discussion, but I'm glad to see that an elastic, market-based solution is being seriously considered by the core developers.

It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.
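A minimal sketch of the rule quoted above, in C++ (illustrative only; VerifyBlock is a hypothetical stand-in for whatever full verification the node performs, and the thresholds are the proposed defaults):

Code:
#include <chrono>
#include <functional>

// Sketch: ignore a block if verifying it takes longer than N seconds,
// where N depends on whether the node is still catching up with the chain.
bool AcceptBlockByVerifyTime(const std::function<bool()>& VerifyBlock,
                             bool fCatchingUp,
                             double nSecondsCatchingUp = 60.0,  // proposed default while syncing
                             double nSecondsCaughtUp = 5.0)     // proposed default when caught up
{
    const double nLimit = fCatchingUp ? nSecondsCatchingUp : nSecondsCaughtUp;

    const auto start = std::chrono::steady_clock::now();
    const bool fValid = VerifyBlock();
    const std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;

    // Invalid blocks are rejected outright; valid but slow blocks are ignored,
    // so a miner producing oversized blocks risks being orphaned by much of the network.
    return fValid && elapsed.count() <= nLimit;
}

Under such a rule, a miner weighing "add more transactions" against "get ignored by half the network" is effectively sizing blocks against the thresholds most nodes run with.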
misterbigg
Legendary
Activity: 1064
Merit: 1001
February 25, 2013, 04:51:10 AM
 #143

It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.
solex
Legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
February 25, 2013, 05:02:54 AM
Last edit: February 25, 2013, 05:38:54 AM by solex
 #144

It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.


And that type of argument takes us nowhere. There have been thousands of comments on the subject and we need to close in on a solution rather than spiral away from one. I have seen your 10-point schedule for what happens when the 1 MB blocks are saturated. There is some probability that you are right, but it is not near 100%, and if you are wrong then the Bitcoin train hits the buffers.

Please consider this and the next posting:
https://bitcointalk.org/index.php?topic=144895.msg1556506#msg1556506

I am equally happy with Gavin's solution which zebedee quotes. Either is better than letting a huge unknown risk become a real event.

caveden
Legendary
Activity: 1106
Merit: 1004
February 25, 2013, 08:24:37 AM
 #145

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.

The block subsidy will be determined by the free market once inflation is no longer relevant, iff the block size limit is dropped. Even Bitcoin inflation itself may, in a sense, one day be determined by the free market, if we start seeing investment assets quoted in Bitcoin being traded with high liquidity: such highly liquid BTC-quoted assets would end up being used in trades, and would become a flexible monetary aggregate. Fractional reserves are not the only way to do it.

Concerning the time between blocks, there have been proposals for ways to make that parameter fluctuate according to supply and demand. I think it was Meni Rosomething, IIRC, who once came up with such ideas. Although potentially feasible, that's a technical risk that might not be worth taking. Perhaps some alternative chain will try it one day, and if it really proves worthwhile as a feature, people might consider it for Bitcoin, why not. I'm just not sure it's that important; 10 minutes seems fine enough.
justusranvier
Legendary
Activity: 1400
Merit: 1009
February 28, 2013, 06:57:05 PM
 #146

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
Using this proposal, all nodes could select for themselves what block size they are willing to accept. The only part that is missing is a way to communicate this information to the rest of the network.

Each node could keep track of the ratio of transaction size to verification time averaged over a suitable interval. Using that number it could calculate the maximum block size likely to meet the time constraint, and include that maximum block size in the version string it reports to other nodes. Then miners could make educated decisions about what size of blocks the rest of the network will accept.
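A rough sketch of that tracking, under stated assumptions (the sliding window, the 1 MB fallback, and the version-string suffix are all illustrative; no such field exists in the protocol):

Code:
#include <cstdint>
#include <deque>
#include <string>

// Hypothetical tracker for the ratio of transaction bytes to verification time,
// averaged over a sliding window of recent blocks.
class CVerifyRateTracker {
public:
    void Record(uint64_t nBytes, double nSeconds) {
        samples.push_back({nBytes, nSeconds});
        if (samples.size() > 100) samples.pop_front();   // keep a "suitable interval"
    }

    // Largest block size expected to verify within nTimeLimit seconds.
    uint64_t MaxAcceptableBlockSize(double nTimeLimit) const {
        uint64_t nTotalBytes = 0;
        double nTotalSeconds = 0.0;
        for (const Sample& s : samples) {
            nTotalBytes += s.bytes;
            nTotalSeconds += s.seconds;
        }
        if (nTotalSeconds <= 0.0) return 1000000;        // no data yet: fall back to 1 MB
        const double bytesPerSecond = nTotalBytes / nTotalSeconds;
        return static_cast<uint64_t>(bytesPerSecond * nTimeLimit);
    }

    // Hypothetical suffix appended to the version/user-agent string so that
    // miners can gauge what size of block the rest of the network will accept.
    std::string VersionSuffix(double nTimeLimit) const {
        return "/maxblock:" + std::to_string(MaxAcceptableBlockSize(nTimeLimit)) + "/";
    }

private:
    struct Sample { uint64_t bytes; double seconds; };
    std::deque<Sample> samples;
};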
marcus_of_augustus
Legendary
Activity: 3920
Merit: 2348
Eadem mutata resurgo
March 12, 2013, 07:18:54 PM
 #147

Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

solex
Legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
March 12, 2013, 09:03:51 PM
Last edit: March 13, 2013, 03:06:55 AM by solex
 #148

Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.
Edit: 0.7 until 0.8.1 is available.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!


marcus_of_augustus
Legendary
Activity: 3920
Merit: 2348
Eadem mutata resurgo
March 13, 2013, 02:34:15 AM
 #149

Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?

notme
Legendary
Activity: 1904
Merit: 1002
March 13, 2013, 02:58:28 AM
 #150

Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
ArticMine
Legendary
Activity: 2282
Merit: 1050
Monero Core Team
March 13, 2013, 03:08:36 AM
 #151


Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1

Concerned that blockchain bloat will lead to centralization? Storing less than 4 GB of data once required the budget of a superpower and a warehouse full of punched cards. https://upload.wikimedia.org/wikipedia/commons/8/87/IBM_card_storage.NARA.jpg https://en.wikipedia.org/wiki/Punched_card
solex
Legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
March 13, 2013, 03:16:25 AM
 #152


No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.
It appears 60% of the network would have recognized the problem block. If more people had been prepared to upgrade in a timely manner it might have been closer to 90%, making this a minor issue and arguably leaving a better situation than exists now.


Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1

Yes. I agree with that because of where the situation is now.


marcus_of_augustus
Legendary
Activity: 3920
Merit: 2348
Eadem mutata resurgo
March 13, 2013, 03:34:33 AM
 #153


No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html

0.8's LevelDB, as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners), did not faithfully emulate BDB, which it was minimally required to do.

Like I said, you do not fully understand the problem so are not qualified to comment any further.

caveden
Legendary
Activity: 1106
Merit: 1004
March 13, 2013, 07:31:43 AM
 #154

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html

0.8's LevelDB, as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners), did not faithfully emulate BDB, which it was minimally required to do.

Come on, such an obscure limit was not known by anyone in the Bitcoin world until it blew up yesterday. You may claim it's not a bug on BDB's side*, which is arguable, but it is definitely a bug on the Bitcoin implementation's side.
Everybody should be able to handle blocks of up to 1 MB. That was the general agreement, the protocol spec if you will. The Satoshi client <= 0.7 was not capable of following that protocol specification as it should; 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan", since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would have been).

* And, bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB: not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi).
Moving away from BDB is certainly desirable. Now, with this even more crippling block size limit, it's pretty much urgent.
marcus_of_augustus
Legendary
Activity: 3920
Merit: 2348
Eadem mutata resurgo
March 13, 2013, 08:28:07 AM
 #155

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html

0.8's LevelDB, as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners), did not faithfully emulate BDB, which it was minimally required to do.

Come on, such an obscure limit was not known by anyone in the Bitcoin world until it blew up yesterday. You may claim it's not a bug on BDB's side*, which is arguable, but it is definitely a bug on the Bitcoin implementation's side.
Everybody should be able to handle blocks of up to 1 MB. That was the general agreement, the protocol spec if you will. The Satoshi client <= 0.7 was not capable of following that protocol specification as it should; 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan", since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would have been).

* And, bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB: not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi).
Moving away from BDB is certainly desirable. Now, with this even more crippling block size limit, it's pretty much urgent.

How can it be a bug if it is clearly defined behaviour in the documentation of the software dependency?

The fact that the devs (or anyone) seem never to have read the documentation of the standard dependencies is the bigger worry, in my opinion.

markm
Legendary
Activity: 2940
Merit: 1090
March 13, 2013, 08:39:59 AM
Last edit: March 13, 2013, 02:51:22 PM by markm
 #156

I was told on #bitcoin-dev that the devs have actually dealt with the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.

Also, that they have run into problems with it before.

So supposedly they were not unaware that BDB can be configured. They even confided that the page size BDB uses is by default the block size of the underlying block device (the disk drive, for example).

So from the sound of it they simply had not set the configuration numbers high enough to accommodate all platforms, or maybe all possible sizes of blockchain reorganisation. (During a re-org, apparently, it needs enough locks to deal with two blocks at once in one BDB transaction?)

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
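For readers unfamiliar with the settings markm is describing, this is roughly how a Berkeley DB environment's lock limits are configured through the C++ API; the values and function name here are illustrative, not the ones any particular Bitcoin release shipped with:

Code:
#include <db_cxx.h>   // Berkeley DB C++ API

// Illustrative only: the kind of environment setup that lived in db.cpp.
// If the lock/object limits are set too low, BDB can run out of lock entries
// while processing a large block or a multi-block reorganisation.
static bool OpenBdbEnvExample(DbEnv& dbenv, const char* pszDataDir)
{
    dbenv.set_lk_max_locks(10000);                  // example value
    dbenv.set_lk_max_objects(10000);                // example value
    dbenv.set_cachesize(0, 25 * 1024 * 1024, 1);    // 25 MB cache, example value

    const int ret = dbenv.open(pszDataDir,
                               DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                               DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER,
                               0);
    return ret == 0;
}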
marcus_of_augustus
Legendary
Activity: 3920
Merit: 2348
Eadem mutata resurgo
March 13, 2013, 09:08:28 AM
Last edit: March 13, 2013, 09:52:01 AM by marcus_of_augustus
 #157

I was told on #bitcoin-dev that the devs have actually dealt with the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.

-MarkM-


Removed as incorrect, thanks Jeff.

markm
Legendary
Activity: 2940
Merit: 1090
March 13, 2013, 09:37:18 AM
Last edit: March 13, 2013, 09:54:26 AM by markm
 #158

Weird, 0.8 uses LevelDB, not BDB, doesn't it?

Does LevelDB use those same calls to set its configuration?

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
jgarzik
Legendary
Activity: 1596
Merit: 1091
March 13, 2013, 09:47:41 AM
 #159

Okay line 83 in db.cpp appears to have been changed from 0.7 to 0.8 ... this is exactly where the incompatibility was introduced.
[...]
So it seems like an unannounced change to an implicit protocol rule.

Incorrect.  0.8 does not use BDB for blockchain indexing.   Those BDB settings you quote are only relevant to the wallet.dat file in 0.8.

The BDB lock limitation simply does not exist in 0.8, because leveldb is used for blockchain indexing, not BDB.


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
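As an aside on markm's earlier question: LevelDB is configured through a leveldb::Options struct rather than BDB's environment calls, so there is no lock table to size at all. A rough sketch with illustrative values (not Bitcoin 0.8's actual settings):

Code:
#include <cassert>
#include <leveldb/cache.h>
#include <leveldb/db.h>
#include <leveldb/filter_policy.h>

int main()
{
    leveldb::Options options;
    options.create_if_missing = true;
    options.block_cache = leveldb::NewLRUCache(25 * 1024 * 1024);   // 25 MB cache, example value
    options.filter_policy = leveldb::NewBloomFilterPolicy(10);      // example value
    options.write_buffer_size = 4 * 1024 * 1024;                    // example value

    leveldb::DB* db = nullptr;
    leveldb::Status status = leveldb::DB::Open(options, "./blockindex_example", &db);
    assert(status.ok());

    // ... use db for block-index reads/writes ...

    delete db;
    delete options.block_cache;
    delete options.filter_policy;
    return 0;
}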
caveden
Legendary
Activity: 1106
Merit: 1004
March 13, 2013, 01:38:02 PM
 #160

How can it be a bug if it is clearly defined behaviour in the documentation of the software dependency?

The fact that the devs (or anyone) seem never to have read the documentation of the standard dependencies is the bigger worry, in my opinion.

A non-understood limitation of an implementation dependency does not define the protocol. In the Bitcoin protocol, blocks of up to 1 MB are allowed. That was the consensus, and that was what all the available documentation said. The <= 0.7 Satoshi implementation wasn't capable of dealing with such blocks. That implementation was bugged, not 0.8.