If the block size limit were raised to 1MB, then this free transaction capacity would likely become 128KB, so that more transactions become free
The capacity will become whatever the miners are willing to set it at, no more. If they want more fees nobody is forcing them to accept free transactions.
|
|
|
Right now, I'm not planning on re-arranging my life to take an unpaid position as a bitcoin developer. You don't need to be unpaid. If your abilities are as good as you claim, you should be able to find plenty of people willing to donate. For example, just look at how quickly the OSX packaging bounty for Armory was raised.
|
|
|
"recently" This is based on your vast and extensive month of experience since registering?
|
|
|
This all started when the lead developer made an "off the cuff" suggestion to enlarge the block size from 250K to 500K. A miner went the extra step and enlarged the block to 1M, which *should* have been legal. But there wasn't enough system and integration testing. [There was *none*, as far as I can tell, with respect to this suggested change.] Perhaps the community will learn to avoid untested changes in the future. It wasn't the size of the block that caused the problem. The presence of an arbitrary 250k block limit prevented the underlying problem in the db from being discovered until the moment a perfect storm, composed of a db upgrade and the limit's removal, created two almost equal miner bases. This could easily have been avoided by simply not relying on magic numbers. It will always be the case that whenever code contains arbitrary numbers based on arbitrary assumptions, that code will break spectacularly.
That is not correct. v0.7 nodes accepted a 1MB block on testnet. The issue was more complex than just the size of the block. By the protocol, the block which was rejected by some nodes SHOULD have been accepted. The 250kb soft limit was simply a default for block construction. Even with it in place, nodes should have accepted properly constructed blocks up to the 1MB limit. It also appears not all v0.7 nodes were affected: some accepted the problem block and some didn't. The defect/limit in BDB wasn't documented and didn't occur on all versions/platforms. It appears the number of transactions being changed as a result of the block validation crossed some magic limit, one that lives not in the Bitcoin source code but, undocumented, inside some implementations of BDB. v0.7's reliance on BDB made it fundamentally broken/flawed. Its actions weren't consistent with the documented protocol, the higher-level source code, or anyone's understanding/expectation of what should have happened. Nobody was saying or thinking "oh yeah, if you make a block with more than X transactions it will abort, cause a rollback, and result in a rejection." It was a landmine.
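The "landmine" described above can be sketched in a few lines. This is a toy model with made-up numbers and hypothetical function names, not actual Bitcoin or BDB code: the protocol-level check passes on every version, but the old storage layer has its own undocumented limit that only trips on a sufficiently large block.

```python
# Toy model (hypothetical names, illustrative numbers): a block that passes
# the protocol-level size check can still fail inside a storage layer that
# has its own undocumented resource limit -- the "landmine" described above.

MAX_BLOCK_SIZE = 1_000_000   # protocol rule: 1 MB, enforced by all versions
HIDDEN_DB_LIMIT = 5_000      # undocumented limit inside the old DB layer

def protocol_valid(num_txs, block_size):
    """The rule everyone agreed on: only the serialized size matters."""
    return block_size <= MAX_BLOCK_SIZE

def v07_accepts(num_txs, block_size):
    """Old node: the protocol check passes, but the DB update aborts if
    validating the block touches more records than the backend allows."""
    return protocol_valid(num_txs, block_size) and num_txs <= HIDDEN_DB_LIMIT

def v08_accepts(num_txs, block_size):
    """New node (different DB backend): no hidden limit."""
    return protocol_valid(num_txs, block_size)

# A large-but-legal block splits the toy network:
big_block = dict(num_txs=6_000, block_size=980_000)
print(protocol_valid(**big_block))  # True  -- valid by the written rules
print(v08_accepts(**big_block))     # True  -- new nodes accept
print(v07_accepts(**big_block))     # False -- old nodes reject: a fork
```

The point of the sketch is that `v07_accepts` disagrees with `protocol_valid` even though nothing in the "source code" above mentions the hidden limit, which is exactly why no amount of reading the higher-level code would have predicted the rejection.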
|
|
|
If you'd like to contribute, testing is an area where we can basically have an infinite amount of additional resources and put them all to good use.
If I set up a node on testnet to CPU mine and just left it would that be helpful, or does it need to be monitored?
|
|
|
I just wish I could access blockchain.info. Apparently some interaction between their DDoS protection and my internet connection prevents me from accessing their site 90% of the time.
|
|
|
The presence of a 250k arbitrary block limit prevented the underlying problem of the db from being discovered until that moment when a perfect storm composed of a db upgrade and the limit removal created two almost equal miner bases. Maybe it wasn't the size of the block that was the problem. https://bitcointalk.org/index.php?topic=152131.msg1614374#msg1614374
|
|
|
Seems odd that 0.7 hasn't gotten a new block for 45 minutes. Each chain has less hashing power than before the split but is still operating on the old difficulty.
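The slow blocks follow directly from the difficulty staying fixed while the hash power splits. A quick back-of-the-envelope (the hashrate fractions below are illustrative guesses, not measured values):

```python
# If difficulty targets a 10-minute average interval for the WHOLE network,
# a chain retaining only a fraction f of the hash power finds blocks every
# 10/f minutes on average. Fractions here are illustrative, not measured.

TARGET_MINUTES = 10.0

def expected_interval(hashrate_fraction):
    """Average minutes between blocks on a chain with this share of hash power."""
    return TARGET_MINUTES / hashrate_fraction

print(expected_interval(0.4))   # chain with ~40% of hash power: 25.0 minutes
print(expected_interval(0.6))   # chain with ~60%: roughly 16.7 minutes
```

At intervals like these, a 45-minute gap on one branch is well within normal variance for a Poisson block-arrival process.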
|
|
|
If they did that, all 0.7 clients would have been left dead in the water, unable to rejoin the main chain and forcing end users to upgrade on the spot. 0.8 clients can rejoin the 0.7 chain, which is why it was preferred.
In hindsight I bet it will turn out that sticking with 0.8 would have resulted in a more rapid resolution. 0.8 already had a majority of hashing power, and now that some of the miners have downgraded it looks like the 0.7 chain is ahead but just barely. We spent several hours in the worst case of a near 50/50 split in hashing power and it's taking a long time for the new branch to pull ahead.
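The intuition that a near 50/50 split takes a long time to resolve can be sanity-checked with a toy random-walk model (all parameters are assumptions for illustration; the required lead of 6 blocks is arbitrary): each new block extends one branch with probability p, and we count blocks until either branch pulls decisively ahead.

```python
import random

def blocks_until_lead(p, lead=6, seed=0):
    """Toy model: each new block extends branch A with probability p,
    branch B otherwise; count blocks until either branch leads by `lead`."""
    rng = random.Random(seed)
    diff = 0       # branch A's height minus branch B's height
    n = 0
    while abs(diff) < lead:
        diff += 1 if rng.random() < p else -1
        n += 1
    return n

def avg(p, runs=2000):
    """Average resolution time over many independent (seeded) runs."""
    return sum(blocks_until_lead(p, seed=s) for s in range(runs)) / runs

# Near parity the race drags on; a clear majority resolves it much faster.
print(avg(0.5))   # roughly the square of the lead (~36 blocks) near 50/50
print(avg(0.6))   # decisively fewer blocks with a 60/40 split
```

This matches the complaint above: an unbiased walk needs on the order of lead-squared steps to hit either boundary, while even a modest hashing majority adds drift that ends the race much sooner.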
|
|
|
There was, indeed, testing on the testnet with a full (1 MB) block. This was accepted by both the 0.7 and 0.8 versions. There is no concern here.
Slush's block should likewise have been valid. However, it was structured in such a way as to expose a problem in 0.7 that had never been discovered. So the problem was a pathological block, not simply a large block.
|
|
|
0.7 refuses good data, 0.8 doesn't put bad data in the blockchain... What is good and bad data is determined by the majority of nodes and miners. Basically Atlas was spot on about the new database engine breaking things, even if it broke them by fixing what was already broken... If it works, don't fix it! A majority of the hashing power was on 0.8 before they decided to downgrade. By your definition, at the moment of the fork the 0.8 version was the good version.
|
|
|
But what I actually meant is that it is a bit surprising that the BerkeleyDB-backed versions were not tested at the maximum block size (which, as I understand it, is expressed as a #DEFINE, and which, again in my understanding, was not changed between 0.7 and 0.8). Maybe they were tested, but the bug only appears in certain versions of BDB. Since a lot of people compile their own node software, who knows how many versions of BDB operating nodes are linked against? We probably won't know for sure until they have time to do a full postmortem.
|
|
|
Personally, I agree 100% that they should have pressed on with v0.8, with that knowledge. However, as I am not involved in the dev sphere, I am going to defer to their collective judgement on the best course of action, which was to revert. +1
|
|
|
No amount of testing? Really? Seems like a slightly better than par proficiency with BerkeleyDB would have sufficed, particularly as the block size has finally come under relatively serious discussion. But I'm only going by what I know of the issue from skimming. 0.8 doesn't use BDB, so no amount of testing of that version would have found the problem, since the problem only exists in versions prior to 0.8.
|
|
|
Is there any way us mere mortals without mining rigs can help? If we all come online with version 0.7, will it help or just create excessive traffic? If you're not mining there is nothing you can do to help, and there's no reason to downgrade to 0.7 if you already upgraded earlier.
|
|
|
The difference between the former and the latter is that I didn't say anything then, because back then nobody of any consequence gave two shits about your little bullshit project here. Then do something about it. Your business depends on this infrastructure, so since you're so big you should be able to spare some investment into fixing the problems, not just identifying them.
|
|
|
Yes, fixed now.
Fixed as in the chain containing the large block is now officially orphaned?
|
|
|
|