Ari
Member
Offline
Activity: 75
Merit: 10
|
|
February 11, 2013, 05:22:12 PM |
|
As much as I would like to see some sort of constraint on blockchain bloat, if this is significantly curtailed then I suspect S.DICE shareholders will invest in mining. I suspect that getting support from miners was a large part of the motivation for taking S.DICE public, as there clearly wasn't a need to raise capital.
|
|
|
|
zebedee
Donator
Hero Member
Offline
Activity: 668
Merit: 500
|
|
February 25, 2013, 04:36:34 AM |
|
Why don't we just let miners decide the optimal block size?
If a miner generates a 1-GB block and it is just too big for other miners, the other miners may simply drop it. That will stop anyone from generating 1-GB blocks, because they will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.
I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal: ignore blocks that take your node longer than N seconds to verify. I'd propose that N be: 60 seconds if you are catching up with the blockchain, 5 seconds if you are all caught up. But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant. Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

I'm a bit late to this discussion, but I'm glad to see that an elastic, market-based solution is being seriously considered by the core developers. It is clearly the right way to go to balance the interests of all concerned parties. Free markets are naturally good at that.
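The default rule quoted above can be sketched in a few lines. This is a hypothetical illustration only, not code from any real Bitcoin client; the function and constant names are invented for the example:

```python
import time

# Illustrative sketch of the time-to-verify rule quoted above.
# All names are hypothetical; no real client works exactly this way.
VERIFY_LIMIT_SYNCING = 60.0    # seconds, while catching up with the blockchain
VERIFY_LIMIT_CAUGHT_UP = 5.0   # seconds, once fully caught up

def should_accept_block(verify_block, block, syncing):
    """Run block verification and ignore the block if it takes too long."""
    limit = VERIFY_LIMIT_SYNCING if syncing else VERIFY_LIMIT_CAUGHT_UP
    start = time.monotonic()
    valid = verify_block(block)           # full script/signature checks
    elapsed = time.monotonic() - start
    return valid and elapsed <= limit
```

In this model, a miner weighing "more transactions" against "orphan risk" is simply trying to keep its blocks under the `limit` used by most of the network.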
|
|
|
|
misterbigg
Legendary
Offline
Activity: 1064
Merit: 1001
|
|
February 25, 2013, 04:51:10 AM |
|
It is clearly the right way to go to balance the interests of all concerned parties. Free markets are naturally good at that.
By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
February 25, 2013, 05:02:54 AM Last edit: February 25, 2013, 05:38:54 AM by solex |
|
It is clearly the right way to go to balance the interests of all concerned parties. Free markets are naturally good at that.
By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.

And that type of argument takes us nowhere. There have been thousands of comments on the subject, and we need to close in on a solution rather than spiral away from one. I have seen your 10-point schedule for what happens when the 1Mb blocks are saturated. There is some probability you are right, but it is not near 100%, and if you are wrong then the bitcoin train hits the buffers. Please consider this and the next posting: https://bitcointalk.org/index.php?topic=144895.msg1556506#msg1556506

I am equally happy with Gavin's solution, which zebedee quotes. Either is better than letting a huge unknown risk become a real event.
|
|
|
|
caveden
Legendary
Offline
Activity: 1106
Merit: 1004
|
|
February 25, 2013, 08:24:37 AM |
|
By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.
The block subsidy will be determined by the free market once inflation is no longer relevant, iff the block size limit is dropped. Even Bitcoin inflation itself may in a sense one day be determined by the free market, if we start seeing investment assets quoted in Bitcoin being traded with high liquidity: such highly-liquid BTC-quoted assets would end up being used in trades, and would become a flexible monetary aggregate. Fractional reserves are not the only way to do it.

Concerning the time between blocks, there have been proposals for ways to make such a parameter fluctuate according to supply and demand. I think it was Meni Rosomething, IIRC, who once came up with such ideas. Although potentially feasible, that's a technical risk that might not be worth taking. Perhaps some alternative chain will try it one day, and if it really shows itself worthwhile as a feature, people might consider it for Bitcoin, why not. I'm just not sure it's that important; 10 min seems fine enough.
|
|
|
|
justusranvier
Legendary
Offline
Activity: 1400
Merit: 1013
|
|
February 28, 2013, 06:57:05 PM |
|
There is still the question of what the default behavior should be. Here is a proposal:
Ignore blocks that take your node longer than N seconds to verify.
I'd propose that N be: 60 seconds if you are catching up with the blockchain. 5 seconds if you are all caught-up. But allow miners/merchants/users to easily change those defaults.
Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.
Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."
Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

Using this proposal, all nodes could select for themselves what block size they are willing to accept. The only part that is missing is to communicate this information to the rest of the network somehow. Each node could keep track of the ratio of transaction size to verification time, averaged over a suitable interval. Using that number it could calculate the maximum block size likely to meet the time constraint, and include that maximum block size in the version string it reports to other nodes. Then miners could make educated decisions about what size of blocks the rest of the network will accept.
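The measurement described above could look something like the following toy sketch. All names are invented for illustration, and it assumes verification throughput is roughly linear in block size, which is a simplification:

```python
from collections import deque

# Toy sketch (hypothetical names) of a node tracking its own verification
# throughput and deriving the largest block it could verify within its
# configured time limit, to advertise to peers.
class VerifyRateTracker:
    def __init__(self, time_limit_secs=5.0, window=100):
        self.time_limit = time_limit_secs
        # recent (block_size_bytes, verify_secs) samples
        self.samples = deque(maxlen=window)

    def record(self, block_size_bytes, verify_secs):
        self.samples.append((block_size_bytes, verify_secs))

    def advertised_max_block_size(self):
        """Largest block size expected to verify within the time limit."""
        total_bytes = sum(size for size, _ in self.samples)
        total_secs = sum(secs for _, secs in self.samples)
        if total_secs == 0:
            return None  # no data yet; advertise nothing
        bytes_per_sec = total_bytes / total_secs
        return int(bytes_per_sec * self.time_limit)
```

A node verifying 1 MB per second with a 5-second limit would, under this model, advertise a 5 MB maximum; miners could then aggregate the figures advertised by their peers.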
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 12, 2013, 07:18:54 PM |
|
Watching.
Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
March 12, 2013, 09:03:51 PM Last edit: March 13, 2013, 03:06:55 AM by solex |
|
Watching.
Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.
Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor. Edit: 0.7, until 0.8.1 is available.

General question: is Deepbit too conservative for its own good? They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 13, 2013, 02:34:15 AM |
|
Watching.
Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.
Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor. General question. Is Deepbit too conservative for its own good? They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!

No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour, and it didn't. Rushing everyone onto 0.8 is asking for problems. Deepbit has been prudent and a pillar of defending the blockchain, and you are pressuring them to do what, exactly?
|
|
|
|
notme
Legendary
Offline
Activity: 1904
Merit: 1002
|
|
March 13, 2013, 02:58:28 AM |
|
Watching.
Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.
Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor. General question. Is Deepbit too conservative for its own good? They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!

Please give the development team time to put together a plan. If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.
|
|
|
|
ArticMine
Legendary
Offline
Activity: 2282
Merit: 1050
Monero Core Team
|
|
March 13, 2013, 03:08:36 AM |
|
Please give the development team time to put together a plan. If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.
+1
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
March 13, 2013, 03:16:25 AM |
|
No, you misunderstand the problem and in the process spreading FUD. 0.8 LevelDB was required to emulate BDB behaviour and it didn't.
Rushing everyone onto 0.8 is asking for problems.
Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?
Criticizing 0.8 for not emulating an unknown bug (let alone one that was in 3rd-party software) is itself FUD. It appears 60% of the network would have recognized the problem block. If more people were prepared to upgrade in a timely manner, then it might have been closer to 90% and a minor issue, arguably leaving a better situation than exists now.

Please give the development team time to put together a plan. If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1 Yes. I agree with that because of where the situation is now.
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 13, 2013, 03:34:33 AM |
|
No, you misunderstand the problem and in the process spreading FUD. 0.8 LevelDB was required to emulate BDB behaviour and it didn't.
Rushing everyone onto 0.8 is asking for problems.
Deepbit has been prudent and a pillar of defending the blockchain and you are pressuring them to do what exactly?
Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.

For the last time, IT WAS NOT A BUG! http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html

0.8's LevelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do. Like I said, you do not fully understand the problem, so are not qualified to comment any further.
|
|
|
|
caveden
Legendary
Offline
Activity: 1106
Merit: 1004
|
|
March 13, 2013, 07:31:43 AM |
|
Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.
For the last time, IT WAS NOT A BUG! http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html 0.8's LevelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do.

Come on, such an obscure limit was not known by anyone in the Bitcoin world up until it blew up yesterday. You may claim it's not a bug on BDB's side*, which is arguable, but it is definitely a bug on the bitcoin implementation's side. Everybody should be able to handle blocks up to 1Mb. That was the general agreement, the protocol spec if you will. The Satoshi client <= 0.7 was not capable of following such protocol specification as it should. 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan", since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would be).

* And bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB - not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi). Moving away from BDB is certainly a desirable thing. Now, with this even more crippling block size limit, it's pretty much urgent.
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 13, 2013, 08:28:07 AM |
|
Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.
For the last time, IT WAS NOT A BUG! http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html 0.8's LevelDB as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners) did not faithfully emulate BDB, which it was minimally required to do. Come on, such an obscure limit was not known by anyone in the Bitcoin world up until it blew up yesterday. You may claim it's not a bug on BDB's side*, which is arguable, but it is definitely a bug on the bitcoin implementation's side. Everybody should be able to handle blocks up to 1Mb. That was the general agreement, the protocol spec if you will. The Satoshi client <= 0.7 was not capable of following such protocol specification as it should. 0.8 onward was. If anything, 0.8 is the "correct version". Bringing everybody back to 0.7 was an "emergency plan", since pushing everybody to 0.8 was believed to be much harder to accomplish (and likely truly would be). * And bug or not, the fact that nobody here even knew about it just shows how much we cannot rely on BDB - not a single person among all the brilliant minds on the core dev team fully understands how this thing works (and neither did Satoshi). Moving away from BDB is certainly a desirable thing. Now, with this even more crippling block size limit, it's pretty much urgent.

How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency? The fact that the devs (or anyone) seem to have never read the documentation of the standard dependencies is more the worry, in my opinion.
|
|
|
|
markm
Legendary
Offline
Activity: 2996
Merit: 1121
|
|
March 13, 2013, 08:39:59 AM Last edit: March 13, 2013, 02:51:22 PM by markm |
|
I was told on #bitcoin-dev that actually the devs have met the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.
Also, that they ran into problems with it before.
So supposedly they were not unaware that BDB can be configured. They even confided that the page size BDB uses is by default the block size of the underlying block device (the disk drive, for example).
So from the sound of it they simply had not set the configuration numbers high enough to accommodate all platforms, or maybe all possible sizes of blockchain reorganisation. (During a re-org, apparently, it needs enough locks to deal with two blocks at once in one BDB transaction?)
-MarkM-
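The failure mode under discussion can be modelled abstractly. The following is a toy illustration only, not BDB's real API: a database environment configured with a fixed lock-table size fails when a single operation (such as connecting an unusually large block during a reorg) needs to touch more pages than the configured maximum allows.

```python
# Toy model (hypothetical names, not BDB's real API) of a fixed lock-table
# limit: each database page touched by an operation needs a lock, and
# exceeding the configured maximum aborts the operation.
class LockExhausted(Exception):
    pass

class ToyLockTable:
    def __init__(self, max_locks):
        self.max_locks = max_locks
        self.held = 0

    def acquire(self, n):
        if self.held + n > self.max_locks:
            raise LockExhausted(
                f"need {self.held + n} locks, configured max is {self.max_locks}")
        self.held += n

def connect_block(locks, pages_touched):
    # Roughly one lock per touched page; a big enough block blows the limit.
    locks.acquire(pages_touched)
```

In this model, nodes configured with a small `max_locks` reject a block that nodes with a larger setting accept, which is the kind of divergence the thread is describing.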
|
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
March 13, 2013, 09:08:28 AM Last edit: March 13, 2013, 09:52:01 AM by marcus_of_augustus |
|
I was told on #bitcoin-dev that actually the devs have met the BDB configuration numbers before, and to look at db.cpp to see where in the bitcoin code they explicitly set the numbers they want BDB to use.
-MarkM-
Removed incorrect, thnx Jeff.
|
|
|
|
markm
Legendary
Offline
Activity: 2996
Merit: 1121
|
|
March 13, 2013, 09:37:18 AM Last edit: March 13, 2013, 09:54:26 AM by markm |
|
Weird, 0.8 uses leveldb, not BDB, doesn't it?
Does leveldb use those same calls to set its configuration?
-MarkM-
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
|
|
March 13, 2013, 09:47:41 AM |
|
Okay line 83 in db.cpp appears to have been changed from 0.7 to 0.8 ... this is exactly where the incompatibility was introduced. [...] So seems like an unannounced change to an implicit protocol rule.
Incorrect. 0.8 does not use BDB for blockchain indexing. Those BDB settings you quote are only relevant to the wallet.dat file in 0.8. The BDB lock limitation simply does not exist in 0.8, because leveldb is used for blockchain indexing, not BDB.
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
caveden
Legendary
Offline
Activity: 1106
Merit: 1004
|
|
March 13, 2013, 01:38:02 PM |
|
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?
The fact that the devs (or anyone) seems to have never read the documentation of the standard dependencies is more the worry, in my opinion.
A non-understood limitation of an implementation dependency does not define the protocol. To the Bitcoin protocol, blocks up to 1Mb are allowed. That was the consensus; that was what every available documentation said. The <= 0.7 Satoshi implementation wasn't capable of dealing with such blocks. That implementation was bugged, not 0.8.
|
|
|
|
|