Bitcoin Forum
May 04, 2024, 06:33:25 AM
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 [27] 28 »
Author Topic: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks  (Read 155474 times)
MooC Tals
Hero Member
*****
Offline Offline

Activity: 644
Merit: 500


View Profile
March 16, 2013, 03:05:17 AM
 #521

If there were a way of limiting the work being done network-wide when you have an outdated miner/client, forcing updates would bring hash rates back up.

After it expires, the older version works like crippleware. I'm just offering a new perspective on solutions; I'm by no means an expert on this. If I'm understanding this correctly, the transition to 0.8 was too slow, right?

If that's the case, then crippling the older version and forcing a timely update would prevent older versions from competing with new versions and incentivize people on older versions to update quickly, preventing a fork.

I could be wrong.
John Tobey
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 16, 2013, 03:27:30 AM
Last edit: March 16, 2013, 03:43:45 AM by John Tobey
 #522

That was my point actually. I was not accusing the developers of negligence; I was saying that his platitudes are a form of "I could have done it better than them if I was them and could see the future". Well, he isn't them and they didn't see the future.
Then we are in violent agreement!  Hooray!

If the network ditches 0.7 then there is no problem.
As long as the network is predominantly 0.7 this would result in you creating a single orphaned block rather than a fork. While certainly a waste, this could happen while using 0.7 to mine as well (for a different reason).
All true, but I suspect you underestimate the, uh, conservatism that makes "the network ditching 0.7" non-trivial.  That may end up being the chosen way out, and Deepbit, Eligius, etc., will plan to upgrade at an appointed time.  Or another solution may emerge.  We've waited months for 0.8, and we can afford to wait a few more days or weeks, though the newly discovered 0.7 bug adds a reason to move.

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
John Tobey
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 16, 2013, 03:40:43 AM
 #523

If there were a way of limiting the work being done network-wide when you have an outdated miner/client, forcing updates would bring hash rates back up.

After it expires, the older version works like crippleware. I'm just offering a new perspective on solutions; I'm by no means an expert on this. If I'm understanding this correctly, the transition to 0.8 was too slow, right?

Well, yes and no.  True, if 90% had upgraded before the fork, the decision would probably have gone in their favour.  But the real problem was an unexpected incompatibility between versions.  There are (rarely) planned forks, and this was an unplanned fork.  Slush's pool didn't create the incompatible block because they believed a majority ran 0.8; they did it because they believed a large majority ran a version that would accept it.

If that's the case, then crippling the older version and forcing a timely update would prevent older versions from competing with new versions and incentivize people on older versions to update quickly, preventing a fork.

I could be wrong.
This sort of crippling would be impossible given the circumstances (no central control, open source).  The best that you can do is urge users to upgrade... or downgrade, as the case may be.  ("rightgrade"?)

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
mp420
Hero Member
*****
Offline Offline

Activity: 501
Merit: 500


View Profile
March 16, 2013, 08:22:02 AM
 #524

All true, but I suspect you underestimate the, uh, conservatism that makes "the network ditching 0.7" non-trivial.

0.7 need not be ditched. Just tell everyone that after a certain date/block everyone must accept all valid blocks, whether by using 0.8 client or by configuring their databases so that an earlier client accepts them.

However, as it's by definition a hard fork, other hardfork-requiring changes to the protocol could be implemented at the same time. And THIS would most likely require everyone to upgrade.
muyuu
Donator
Legendary
*
Offline Offline

Activity: 980
Merit: 1000



View Profile
March 16, 2013, 11:56:43 AM
 #525

I am not talking about being on the safe side, I am talking about the general consensus on what should be done or not done, without patching anything.

Stay on 0.7 or move to 0.8 if you are _making_ blocks?



Eligius makes blocks, that was the point.

GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D)
forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
Hawkix
Hero Member
*****
Offline Offline

Activity: 531
Merit: 505



View Profile WWW
March 16, 2013, 02:15:12 PM
 #526

Just a question - what would happen if the split were caught too late, allowing a split-mined block to reach 100 confirmations and mature? Does it play any role, or is an even larger reorg, like several hundred blocks, still doable (with not too harsh consequences)?

Donations: 1Hawkix7GHym6SM98ii5vSHHShA3FUgpV6
http://btcportal.net/ - All about Bitcoin - coming soon!
Ichthyo
Hero Member
*****
Offline Offline

Activity: 602
Merit: 500


View Profile
March 16, 2013, 07:01:42 PM
 #527

Just a question - what would happen if the split were caught too late, allowing a split-mined block to reach 100 confirmations and mature? Does it play any role, or is an even larger reorg, like several hundred blocks, still doable (with not too harsh consequences)?

The reorg will work itself out, unless a chain has been "pinned" by one of those hard-baked checkpoints added by the Bitcoin devs to make the "very ancient" history definite and unchangeable. Regarding what counts as harsh consequences -- well, that is debatable.

Whenever someone spends coins on a branch, and that branch gets superseded, those transactions cease to be valid, so the receiver's balance ceases to exist. Now, if both branches are "basically compatible", the now-unconfirmed transactions could be replayed on the new, now-valid branch, thereby bringing the receiver's ceased balance back into existence. (The reference client does this re-verification automatically, if I am informed correctly.)

But this re-confirming of obsoleted transactions from the orphaned chain cannot be done under certain circumstances:
  • when a transaction uses (even indirectly) a TX output which has been spent by another transaction on the new chain
    (this would be a "double spend")
  • when a transaction relies (even indirectly) on a TX output which does not or cannot exist on the new chain.
    These would be the coinbase (block reward) transactions on the old chain since the split.

The second point explains the significance of the (roughly) 100-block limit.
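The cascade described in the second bullet can be sketched as a fixed-point computation. This is a hypothetical data model, not the reference client's code, and it covers only the second bullet (missing outputs), not the double-spend case:

```python
# Sketch with a hypothetical data model (not the reference client's code):
# deciding which transactions from an orphaned branch can be replayed on
# the winning branch. A transaction is replayable only if every output it
# spends either exists on the new chain or comes from another replayable
# transaction. Coinbase outputs minted on the orphaned branch -- and
# anything descended from them -- never qualify: that is the avalanche.

def replayable(txs, spendable_on_new_chain):
    """txs: list of (txid, spent_txids, is_coinbase).
    Returns the set of txids that can be re-confirmed on the new chain."""
    ok = set()
    changed = True
    while changed:  # iterate to a fixed point so dependencies cascade
        changed = False
        for txid, inputs, is_coinbase in txs:
            if txid in ok or is_coinbase:
                continue  # orphaned-branch coinbases are lost outright
            if all(i in spendable_on_new_chain or i in ok for i in inputs):
                ok.add(txid)
                changed = True
    return ok

# 'c1' is a coinbase mined after the split; 't1' spends it, and 't2'
# spends 't1', so neither can be replayed. 't3' spends an output that
# also exists on the new chain, so it survives.
txs = [("c1", [], True), ("t1", ["c1"], False),
       ("t2", ["t1"], False), ("t3", ["a0"], False)]
print(replayable(txs, {"a0"}))  # only 't3'
```

Handling the first bullet (double spends) would additionally require the set of outputs already spent on the new chain; this sketch omits that.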
taltamir
Full Member
***
Offline Offline

Activity: 196
Merit: 100


View Profile
March 16, 2013, 07:04:46 PM
 #528

That was my point actually. I was not accusing the developers of negligence; I was saying that his platitudes are a form of "I could have done it better than them if I was them and could see the future". Well, he isn't them and they didn't see the future.
Then we are in violent agreement!  Hooray!

If the network ditches 0.7 then there is no problem.
As long as the network is predominantly 0.7 this would result in you creating a single orphaned block rather than a fork. While certainly a waste, this could happen while using 0.7 to mine as well (for a different reason).
All true, but I suspect you underestimate the, uh, conservatism that makes "the network ditching 0.7" non-trivial.  That may end up being the chosen way out, and Deepbit, Eligius, etc., will plan to upgrade at an appointed time.  Or another solution may emerge.  We've waited months for 0.8, and we can afford to wait a few more days or weeks, though the newly discovered 0.7 bug adds a reason to move.


In retrospect you are right.
The best solution would be to make a new release that does both A and B concurrently.
That is, make a version of 0.8.1 that:
A. Has a block rejection algorithm that will reject non-0.7-compatible blocks - this must be programmed first, as such a thing does not exist
B. Is configured to not generate non-0.7-compatible blocks - the functionality exists, it just needs to be configured.
Both of those can be set to expire after a certain date.

This would make v0.8.1 superior to 0.7 for mining because even though it rejects the same blocks 0.7 does, it has the advantage of never generating such blocks (which would be orphaned). Mining on 0.7 is thus more risky, as there is a chance that you will generate a block that will be incompatible and orphaned.
taltamir
Full Member
***
Offline Offline

Activity: 196
Merit: 100


View Profile
March 16, 2013, 07:06:27 PM
 #529

Just a question - what would happen if the split was catched too late, allowing to get 100 confirmation for split-mined block to mature? Does it play any role, or just even larger reorg, like several hundred blocks, is still doable (with not too harsh consequences)?

Someone correct me if I am wrong, but AFAIK transactions are not lost; they would be duplicated on both sides of the fork.
It's just that whoever mines on the losing fork ends up losing all their mining work.
Transactions could appear delayed for a few hours, though.
Ichthyo
Hero Member
*****
Offline Offline

Activity: 602
Merit: 500


View Profile
March 16, 2013, 07:09:30 PM
 #530

Someone correct me if I am wrong, but AFAIK transactions are not lost; they would be duplicated on both sides of the fork.
It's just that whoever mines on the losing fork ends up losing all their mining work.
Transactions could appear delayed for a few hours, though.
...see the explanation two posts above. The point is, not only the reward itself but every transaction based on this reward cannot be re-verified on the other chain. Thus, this can evolve into an avalanche, given enough time passing after the split.
muyuu
Donator
Legendary
*
Offline Offline

Activity: 980
Merit: 1000



View Profile
March 16, 2013, 09:42:41 PM
 #531

Someone correct me if I am wrong, but AFAIK transactions are not lost; they would be duplicated on both sides of the fork.
It's just that whoever mines on the losing fork ends up losing all their mining work.
Transactions could appear delayed for a few hours, though.
...see the explanation two posts above. The point is, not only the reward itself but every transaction based on this reward cannot be re-verified on the other chain. Thus, this can evolve into an avalanche, given enough time passing after the split.

I can see no other fix than not including these transactions in the good chain. Thus, all related transactions would vanish from their recipient's balance, regardless of the level of taint.

This is quite radical, so I hope it doesn't happen.

GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D)
forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
evilpete
Member
**
Offline Offline

Activity: 77
Merit: 10



View Profile
March 16, 2013, 10:10:36 PM
 #532

That was my point actually. I was not accusing the developers of negligence; I was saying that his platitudes are a form of "I could have done it better than them if I was them and could see the future". Well, he isn't them and they didn't see the future.
Then we are in violent agreement!  Hooray!

If the network ditches 0.7 then there is no problem.
As long as the network is predominantly 0.7 this would result in you creating a single orphaned block rather than a fork. While certainly a waste, this could happen while using 0.7 to mine as well (for a different reason).
All true, but I suspect you underestimate the, uh, conservatism that makes "the network ditching 0.7" non-trivial.  That may end up being the chosen way out, and Deepbit, Eligius, etc., will plan to upgrade at an appointed time.  Or another solution may emerge.  We've waited months for 0.8, and we can afford to wait a few more days or weeks, though the newly discovered 0.7 bug adds a reason to move.


In retrospect you are right.
The best solution would be to make a new release that does both A and B concurrently.
That is, make a version of 0.8.1 that:
A. Has a block rejection algorithm that will reject non-0.7-compatible blocks - this must be programmed first, as such a thing does not exist

No such thing exists, and probably never will exist.  BDB doesn't have a reliable formula for calculating how many locks a given transaction takes.  Even BDB isn't consistent with itself.  When it rejects a block at runtime, it is capable of processing it the second time around, when the memory state is different.

Quote
B. Is configured to not generate non-0.7-compatible blocks - the functionality exists, it just needs to be configured.
Both of those can be set to expire after a certain date.

0.8 is already like this.  By default, neither 0.7 nor 0.8 will generate blocks that 0.7 will reject.  Both 0.7 and 0.8 can be configured to make blocks that will break 0.7.  Given the right luck, a block could be generated that broke some 0.7 nodes and not others.

The fork happened because somebody used the -blockmaxsize setting to change the default behavior and create valid, full-sized blocks.
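For reference, this is the sort of configuration involved (an illustrative bitcoin.conf fragment; 250000 bytes is my understanding of the long-standing default cap, which pre-0.8 BDB-based nodes were known to handle):

```
# bitcoin.conf fragment (illustrative). Raising this cap let miners
# produce larger blocks; keeping it at the conservative default avoided
# producing blocks that pre-0.8 (BDB-based) nodes could fail to process.
blockmaxsize=250000
```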

Quote
This would make v0.8.1 superior for mining than 0.7 because even though it rejects the same blocks 0.7 does, it has the advantage of not ever generating such blocks (which would be orphaned). Mining on 0.7 is thus more risky as there is a chance that you will generate a block that will be incompatible and orphaned.

There is no silver bullet.  The only robust solution is to have people either:
1) update to 0.8+ (except for the large mining pools).
or
2) update to a (yet to be released) fixed 0.6 or 0.7 that increases the lock tables such that this will never happen again
or
3) manually raise the lock table limits, via DB_CONFIG.
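For option 3, the workaround circulated at the time was a DB_CONFIG file placed in the client's database directory to raise BDB's lock limits. The values below are the commonly quoted ones; check the official alert for the exact recommendation:

```
set_lk_max_locks 537000
set_lk_max_objects 537000
```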

Once the merchants and end users have had time to do their updates, then the mining pools can fix their end.

People who are mining as part of a pool should still update.  What matters for the fork is what their pool is running, not what they themselves are running.

0.8.1 can't fix this problem. The mining pool hashpower is holding back to buy people time to upgrade.

First they ignore you, then they laugh at you, then they fight you, then you win.
- Mahatma Gandhi
dscotese
Sr. Member
****
Offline Offline

Activity: 444
Merit: 250


I prefer evolution to revolution.


View Profile WWW
March 16, 2013, 11:38:31 PM
 #533

Just a question - what would happen if the split were caught too late, allowing a split-mined block to reach 100 confirmations and mature? Does it play any role, or is an even larger reorg, like several hundred blocks, still doable (with not too harsh consequences)?

This question fascinates me.  Both chains would continue getting built, and anyone paying attention would have to be really unobservant not to notice that every block eventually has an orphan version.  But let's say we all got stoned and didn't notice.  Then what?

As long as enough miners stayed on the 0.8 chain that accepted the large valid block, that chain would be the valid chain (but only according to the clients that accepted it).  0.7 clients would see the other chain as valid and the network for all clients would seem to be getting spammed by a lot of invalid blocks.

How many orphaned blocks will the reference client remember in case a reorg comes up?  I'm guessing all of them?

If enough miners were on the 0.7 chain, it would eventually catch up and do a reorg (as actually happened).  This would force all the transactions that "used to be good" but no longer are to get re-broadcast to get into the new chain.  There's another question:  Wouldn't most of them be in both chains already?

I like to provide some work at no charge to prove my value. Avoid supporting terrorism!
Satoshi Nakamoto: "He ought to find it more profitable to play by the rules."
hgmichna
Hero Member
*****
Offline Offline

Activity: 695
Merit: 500


View Profile
March 17, 2013, 09:33:46 AM
 #534

[...] As long as enough miners stayed on the 0.8 chain that accepted the large valid block, that chain would be the valid chain (but only according to the clients that accepted it).  0.7 clients would see the other chain as valid […]

So the bitcoin technology still has a possibly serious technical flaw. It cannot cope well with certain software defects.

I don't know the technology very well. I don't know what reorg means, for example. I would have hoped that bitcoin clients can automatically detect a fork and decide on one of the branches. If I understand things correctly, that does not work in this case, because the 0.7.x clients reject, and thus do not take into account, a branch that begins with a block they consider illegal.

If I see this correctly, then there is no easy and obvious solution to this problem, except perhaps to make the block structure so simple that provably error-free software can be produced to handle it.

The only other solution I can see is baby-sitting by the developer and miner community. It has worked this time; perhaps it will work similarly well for the whole future of bitcoin.
Stephen Gornick
Legendary
*
Offline Offline

Activity: 2506
Merit: 1010


View Profile
March 17, 2013, 10:00:05 AM
Last edit: March 17, 2013, 11:37:39 AM by Stephen Gornick
 #535

This would force all the transactions that "used to be good" but no longer are to get re-broadcast to get into the new chain.

Nope.  Those v0.8 clients simply added the transactions from the orphaned block back into the memory pool; they do not re-broadcast them.  I don't even think the node that sent the transaction will re-broadcast it immediately upon the block being orphaned -- it probably gets treated like normal: at some random point (e.g., over the next half hour) the node re-broadcasts it because it has no confirmations.  [Edit: the sending node was a v0.7 node, so it had no concept of orphaned v0.8 blocks and should've just re-broadcast periodically until confirmed.]
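The behaviour described above can be sketched as follows (a hypothetical model, not the reference client's actual code): disconnecting an orphaned block returns its transactions to the memory pool without relaying them.

```python
# Hypothetical model: a node disconnecting an orphaned block. The block's
# transactions lose their confirmed status and re-enter the memory pool;
# nothing is broadcast -- re-broadcasting is left to the original sender,
# whose wallet periodically resends unconfirmed transactions.

def disconnect_block(block_txs, mempool, confirmed):
    for tx in block_txs:
        confirmed.discard(tx)   # no longer in the active chain
        mempool.add(tx)         # back into the pool, silently (no relay)

mempool = set()
confirmed = {"tx_okpay", "tx_other"}
disconnect_block(["tx_okpay"], mempool, confirmed)
print(sorted(mempool), sorted(confirmed))  # ['tx_okpay'] ['tx_other']
```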

There's another question:  Wouldn't most of them be in both chains already?

Take the 211.9093 BTC (~$10K) transaction to OKPay (12814b8ad57ce5654ba69eb26a52ddae1bff42093ca20cef3ad96fe7fd85d195).  It had been broadcast and then included on the v0.8 side in block 225,446 (00000000000000757de9173ee2f01c9f957c8aa3b4f71b901b0fa19683ab1fa1) but did not confirm on the v0.7 side, presumably because the transaction had inputs that hadn't yet confirmed on the v0.7 side.  So on the v0.7 side, after the transaction had been broadcast and relayed, you had new v0.7 mining nodes starting up.  But they don't pull transactions from peers; they only listen for broadcasts.

So that's a normal situation in which these new v0.7 mining nodes are ignorant of a double-spending race attack potentially being attempted.  Only BTC Guild knows if and when their node had received 12814b8ad57ce5654ba69eb26a52ddae1bff42093ca20cef3ad96fe7fd85d195, and if it had, why that node (running v0.7 by then) no longer knew of it when it mined the double spend (762443f6373b7c8b3833d4ad23578fc3099cc29b86d1359d0c0565e3c8614f91) more than six and a half hours later.  It could be that the node had just been spun up and the memory pool simply hadn't received that transaction yet.  I might suspect that the memory pool for that node was filling and, as a result, was housekeeping by flushing older and lower-priority transactions out.  This was an intentional double spend (even if the losses to the merchant were repaid voluntarily by the "attacker" at a later time) and was carried out by running a script to broadcast the double spend continuously, every ten seconds.  The initial transaction was submitted through Blockchain.info (though I'm not sure whether that was as a raw transaction through their pushtx, through their API, or maybe even the normal web interface).  It is possible that it was a fire-and-forget action, where the transaction was never re-broadcast again.

What might have been useful is some hard-fork recovery approach in which the memory pool is initialized using the list of transactions from a set of blocks (the ones that would be orphaned on the v0.8 side).  If any arriving blocks had transactions that included a double spend of the recovery transactions, those blocks would be ignored.  This mode would continue until all transactions from that initial set were either included in blocks or rejected as invalid.  Then the client would switch back to normal and bring in the transactions that had queued up while the node was in recovery mode.  As long as enough hashing capacity followed this "deferential recovery" mode, the recovery might have been nearly as quick, and no merchants would have had confirmed transactions on the v0.8 side that ended up not confirming on the v0.7 side when the transaction was valid on both.  That's not a violation of the protocol; it is just a manual override mode that would help protect Bitcoin's prime directive #3 -- transactions having six confirmations or more on the longest chain are never reversed.


dscotese
Sr. Member
****
Offline Offline

Activity: 444
Merit: 250


I prefer evolution to revolution.


View Profile WWW
March 17, 2013, 11:04:14 PM
 #536

[...] As long as enough miners stayed on the 0.8 chain that accepted the large valid block, that chain would be the valid chain (but only according to the clients that accepted it).  0.7 clients would see the other chain as valid […]

So the bitcoin technology still has a possibly serious technical flaw. It cannot cope well with certain software defects.
What the miners did by switching their hashing power from the 0.8 chain to the 0.7 chain was to intentionally abandon the correct chain in order to allow users with version 0.7 to continue using their software without seeing what amounted to invalid confirmations (from other miners who were already on the 0.7 chain, erroneously ignoring a valid block).

I don't see it as a serious technical flaw.  There is a serious technical flaw in 0.7, and miners now know how to avoid it.  The deeper problem to contend with is that people can use old software that can provide confirmations to possibly invalid transactions if they are added to a block by a miner after there is another hard fork.  So just upgrade to 0.8, or, as you say, baby-sit so you know when every block is getting orphaned.

The only other solution I can see is baby-sitting by the developer and miner community. It has worked this time; perhaps it will work similarly well for the whole future of bitcoin.
Yes, baby-sitting.  Setting things up and then ignoring them because you believe they are perfect is dangerous and foolish.  It's the ease with which we fix things that is important.  Convention says that the 0 in 0.7 (and 0.8) means that the bitcoin-qt reference client is still in beta.

I have a more administrative idea, which is to piggy-back some kind of notice on the peer discovery work that the client does.  The notice would generate a list of "problems that are less prevalent in the other side's version" so that once a client gets a number of peers telling it about an important problem that will be 'less prevalent" in a different version, it can inform the user.

As a community, we have accepted miners colluding (however openly) to abandon one chain in order to accommodate a flaw in a client in widespread use.  Any powerful interests threatened by bitcoin might start working on exploiting such flaws.  If this bothers any merchants, their software can watch for consecutive orphan blocks, as that would be a sign that currently valid blocks might not contain all valid transactions, and pause for user input.  In other words, we can make the software do some babysitting.

Ok, but those last two ideas are just trying to get around the fact that we should be babysitting anyway.  And we are.

I like to provide some work at no charge to prove my value. Avoid supporting terrorism!
Satoshi Nakamoto: "He ought to find it more profitable to play by the rules."
caveden
Legendary
*
Offline Offline

Activity: 1106
Merit: 1004



View Profile
March 18, 2013, 09:17:04 AM
 #537

it obviously did, but with the entire network rejecting said blocks they caused orphans instead of a fork.

Did 0.7 generate blocks that 0.7 rejected?

Theoretically it could have. I don't know how it would work out though. Would the generator have a BDB error when trying to persist his own block and then abandon it, or would it manage to attempt a propagation and then get rejected by everybody else?

That said, the demand for such large blocks is recent. The event of generating such a problematic block probably never happened before simply because there was no demand for it.
tvbcof
Legendary
*
Offline Offline

Activity: 4592
Merit: 1276


View Profile
March 19, 2013, 12:35:45 AM
 #538

...


Love the graphic (enough to break my rule of thumb about not re-posting them through a quote...)  Two things though:

1) I don't sense that everyone agrees with directive #1.  Or at least the degree to which it is important varies widely even among those who can conceptualize high traffic systems and the magnitude of various economies.

2) The elephant in the room would be directive #4.  You know...the 'classified' one.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
March 19, 2013, 12:50:27 AM
 #539

1) I don't sense that everyone agrees with directive #1.  Or at least the degree to which it is important varies widely even among those who can conceptualize high traffic systems and the magnitude of various economies.

Perhaps the perceived degree of support for #1 is really the question of what defines centralization (conversely decentralization):

Consider 10,000 nodes of which 1,000 are mining, all owned by individuals. Does that have the same decentralization as 10,000 nodes (1,000 mining) owned by companies, organizations and co-operatives?


markm
Legendary
*
Offline Offline

Activity: 2940
Merit: 1090



View Profile WWW
March 19, 2013, 01:43:09 AM
 #540

I suspect for the most part no, if such corporate entities are registered / listed so governments can go knock on their doors.

It is similar to how having a list of people who hold gun licenses impacts the decentralisation of guns. First it lets them go knock on doors; later it can also become a moat preventing "undesirables" from even being allowed a license; eventually those who do have a license welcome a deeper and deeper moat to keep competitors at bay, and so on.

Meanwhile big brother knows where they all live.

Except the "undesirables" who to the extent they continue to exist and use the technology are the real bastion of decentralisation.

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/