Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 [21] 22 23 24 25 26 27 28 »
Author Topic: Alert: chain fork caused by pre-0.8 clients dealing badly with large blocks  (Read 155476 times)
DoomDumas
Legendary
*
Offline Offline

Activity: 1002
Merit: 1000


Bitcoin


View Profile
March 12, 2013, 04:16:07 PM
 #401

It's official, I guess SD just broke Bitcoin.  Thanks, Eric, for all those unspendable outputs and massive block sizes.  I'm sure your apologists will claim that this event is good for Bitcoin because we stress-tested it.  Fuck that noise.  If it had taken longer to reach this point, everyone would have been running 0.8 or newer, and the issue caused by old clients could not have happened.

I blame SD.  SD pushed our beta product way too far.  Shame on Eric and his greedy little BS company.  I hope its stock tanks.  I hope miners filter out the 1Dice addresses.  Fuck that noise!

And you still quote Eric in your sig ;P
HappyScamp
Sr. Member
****
Offline Offline

Activity: 314
Merit: 250



View Profile
March 12, 2013, 04:19:11 PM
 #402

Thanks Peter for the info and the prominent posting!

jgarzik
Legendary
*
Offline Offline

Activity: 1596
Merit: 1091


View Profile
March 12, 2013, 04:22:44 PM
 #403

So is anything over the 250k soft limit not safe, or anything over 500k? Or has that not been determined yet?

The limitation is not related to block size, but rather number of transactions accessed/updated by a block.

In general, the default for 0.8 (250k) is fine.  Raising that to 500k is likely just as fine.

But it is not very simple to answer the question, because this 0.7 BDB lock issue becomes more complicated with larger BDB transactions.
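To make Jeff's point concrete, here is a toy back-of-envelope model; the per-transaction page count and the lock-table size are illustrative assumptions, not measured Berkeley DB figures:

```python
# Toy model of jgarzik's point: BDB lock pressure scales with the
# number of records a block touches, not with raw block size.
# pages_per_tx and DEFAULT_LOCK_TABLE are illustrative assumptions.

def locks_needed(n_txs, pages_per_tx=2):
    """Assume each transaction touches ~pages_per_tx database pages
    and each touched page costs one lock (both assumptions)."""
    return n_txs * pages_per_tx

DEFAULT_LOCK_TABLE = 10000  # order of magnitude of the pre-fix limit

for n_txs in (500, 2000, 8000):
    need = locks_needed(n_txs)
    status = "ok" if need <= DEFAULT_LOCK_TABLE else "may exhaust lock table"
    print(f"{n_txs} txs -> ~{need} locks: {status}")
```

Under this sketch, a block stuffed with small transactions exhausts the lock table long before an equally sized block containing a few large transactions would.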


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
wtfvanity
Hero Member
*****
Offline Offline

Activity: 504
Merit: 500


WTF???


View Profile
March 12, 2013, 04:23:28 PM
 #404

So is anything over the 250k soft limit not safe, or anything over 500k? Or has that not been determined yet?

The limitation is not related to block size, but rather number of transactions accessed/updated by a block.

In general, the default for 0.8 (250k) is fine.  Raising that to 500k is likely just as fine.

But it is not very simple to answer the question, because this 0.7 BDB lock issue becomes more complicated with larger BDB transactions.



TY for the clarification.

          WTF!     Don't Click Here              
ProfMac
Legendary
*
Offline Offline

Activity: 1246
Merit: 1001



View Profile
March 12, 2013, 04:45:12 PM
 #405

So, can we all agree that rushing things to handle a transaction volume mostly comprised of SatoshiDice spam isn't a good idea, and that it's SD that needs to change, not Bitcoin?
Nope.

I'm not saying it shouldn't be fixed. Just saying that rushing fixes isn't the way to go, as can be seen from this major fuckup.


Is there a link to "the best" thread to place the future SD impact discussion?

I try to be respectful and informed.
tvbcof
Legendary
*
Offline Offline

Activity: 4592
Merit: 1276


View Profile
March 12, 2013, 05:04:40 PM
 #406

...

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.

I would like to understand with better precision what you mean by this.  Can you point to a particularly enlightening bit of documentation or discussion about this issue?

From your brief description, it seems to me that this is one of the most show-stopping deficiencies in Bitcoin's design.

This is because, no matter whether the block chain remains small and universally accessible or grows huge and accessible only to those with state-of-the-art data-processing systems, we can always expect to run up against block size limits during certain periods.  If doing so causes a 'blood clot' in the system, then figuring out how to dismiss the stagnant transactions so they don't cause ongoing issues seems like a high-priority line of development.

I cannot see switching to a more efficient database as anything but an uphill foot-race which will ultimately be lost, even if Bitcoin evolves to the point where it is only run by cluster-type 'supernodes' with fat-pipe connectivity and the blockchain in RAM.  Even if we do go this direction, sorting out the 'backed up transactions' issue while the system is still small seems like a good idea.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
EuroTrash
Hero Member
*****
Offline Offline

Activity: 728
Merit: 500



View Profile
March 12, 2013, 05:06:22 PM
 #407

I blame SD.  SD pushed our beta product way too far.  Shame on eric and his greedy little BS company.  I hope its stocks tanks.  I hope miners filter out the 1Dice.  Fuck that noise!

This is witch hunting. And unfair to SD. Which provided and provides a great service to this community.

<=== INSERT SMART SIGNATURE HERE ===>
Timo Y
Legendary
*
Offline Offline

Activity: 938
Merit: 1001


bitcoin - the aerogel of money


View Profile
March 12, 2013, 05:08:55 PM
 #408

I'm kind of glad this happened.

Firstly, it's better now than 3-5 years from now. We want evolutionary pressure that gradually leads to a battle hardened Bitcoin, but we don't want extinction events.

Secondly, it illustrates an important principle of Bitcoin in practice:

Social convention trumps technical convention

The implications are that even if a fatal protocol flaw is discovered in future, and even if there is a 51% attack, people will not lose their bitcoins, as long as the community reaches consensus on how to change the rules.

GPG ID: FA868D77   bitcoin-otc:forever-d
piotr_n
Legendary
*
Offline Offline

Activity: 2053
Merit: 1354


aka tonikt


View Profile WWW
March 12, 2013, 05:09:54 PM
 #409

...

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.

I would like to understand with better precision what you mean by this.  Can you point to a particularly enlightening bit of documentation or discussion about this issue?
I believe he means that if you have a constant rate of 6 blocks/hour and a fixed maximum number of transactions per block, then as the number of transactions goes up, they eventually exceed the "bandwidth" limit (which is 6 * max-tx-in-block per hour), and instead of being mined at the time they are announced, they get queued to be mined later...
Which is exactly what I have been observing for the last few weeks - even transactions with a proper fee needed hours to be mined.
And this is very bad.
Bitcoin really needs to start handling bigger blocks - otherwise our transactions will soon need ages to get confirmed.
The network will just jam if we stay at the old limit.
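piotr_n's "bandwidth" arithmetic can be sanity-checked in a few lines; the 1000-tx-per-block capacity below is a made-up round number, purely for illustration:

```python
# Minimal queue model of the throughput limit described above:
# sustained demand above 6 * max_tx_per_block per hour can only
# accumulate as a growing backlog.  The 1000 tx/block capacity is
# an illustrative assumption, not a protocol constant.

def backlog_after(hours, tx_per_hour, max_tx_per_block, blocks_per_hour=6):
    capacity = blocks_per_hour * max_tx_per_block  # tx/hour the chain absorbs
    queued = 0
    for _ in range(hours):
        queued = max(0, queued + tx_per_hour - capacity)
    return queued

print(backlog_after(24, tx_per_hour=7000, max_tx_per_block=1000))  # 1000/hour surplus -> 24000
print(backlog_after(24, tx_per_hour=5000, max_tx_per_block=1000))  # under capacity -> 0
```

The point of the sketch: once demand sits above the fixed ceiling, the queue grows without bound no matter how high a fee individual transactions pay.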

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
bitcoinBull
Legendary
*
Offline Offline

Activity: 826
Merit: 1001


rippleFanatic


View Profile
March 12, 2013, 05:17:10 PM
 #410

So my question is: who did this affect negatively, and who took the hit because of it?

Someone had to lose a good amount of BTC yesterday.

> Eleuthria: ~1500 BTC lost in 24 hours from this

http://bitcoinstats.com/irc/bitcoin-dev/logs/2013/03/12

He lost BTCGuild's entire hot wallet in under 60 seconds. But that was due to a different issue: his pool software misbehaved while upgrading to 0.8 (miners were suddenly being credited at difficulty=1). It was a separate issue from the blockchain fork.

College of Bucking Bulls Knowledge
pwrgeek
Newbie
*
Offline Offline

Activity: 13
Merit: 0


View Profile
March 12, 2013, 05:17:57 PM
 #411

Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing?  Seems to me like the devs and the pools worked together to quickly address the issue and that there is a plan to move forward with a permanent fix. Just my $.02.
piotr_n
Legendary
*
Offline Offline

Activity: 2053
Merit: 1354


aka tonikt


View Profile WWW
March 12, 2013, 05:20:49 PM
 #412

Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing? 
You are missing the fact that it was a great opportunity to double-spend any coins.
First: you send 1000 BTC to a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
tvbcof
Legendary
*
Offline Offline

Activity: 4592
Merit: 1276


View Profile
March 12, 2013, 05:23:35 PM
 #413

...

Also, I'm afraid it's very easy to say "just test for longer" but the reason we started generating larger blocks is that we ran out of time. We ran out of block space and transactions started stacking up and not confirming (ie, Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply puts us out of the frying pan and back into the fire.

We will have to roll forward to 0.8 ASAP. There isn't any choice.

I would like to understand with better precision what you mean by this.  Can you point to a particularly enlightening bit of documentation or discussion about this issue?
I believe he means that if you have a constant rate of 6 blocks/hour and a fixed maximum number of transactions per block, then as the number of transactions goes up they will eventually exceed the limit, and instead of being mined at the time they are made, they get queued...
Which is exactly what I have been observing for the last few weeks - even transactions with proper fees are mined hours later.

Bitcoin really needs to start handling bigger blocks - otherwise our transactions will soon need ages to get confirmed.
The network will get stuck if we stay at the limit.

So the solution is to continue to increase the block size as demand provokes this issue?  That does not strike me as a particularly good strategy, for a lot of reasons.  Among them, I can envision demand growing exponentially, and at a much faster rate than processing capacity can be developed even by highly capitalized and proficient entities.

The only upside to this solution is that, in the not-too-distant future, only a handful of large entities will have problems, because only they will form the critical-infrastructure part of the Bitcoin network.  In fact, it could legitimately be argued that we are already at that point due to the makeup of the mining pools.  During this last issue they seemed cooperative and in favor of the dev team's desired 'fix' - or at least the consensus of the current dev team.  What happens in future 'events' will be interesting to observe.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
knightmb
Sr. Member
****
Offline Offline

Activity: 308
Merit: 258



View Profile WWW
March 12, 2013, 05:25:08 PM
 #414

I believe he means that if you have a constant rate of 6 blocks/hour and a fixed number of max-transaction-per-block, when the number of transactions is going up, they eventually go above the "bandwidth" limit (which is: 6 * max-tx-in-block / hour) and instead of being mined at the time when they are announced, they are getting queued, to be mined later...
Which is exactly what I have been observing for the last few weeks - even transaction with a proper fee needed like hours to be mined.
And this is very bad.
Bitcoin really needs to start handling bigger blocks - otherwise soon our transactions will need ages to get confirmed.
The network will just jam, if we stay at the old limit.
I don't believe the design specs for Bitcoin will allow this; it just isn't possible to scale it without a complete redesign of how the internals work. That is kind of the reason this problem has shown up now, while other theoretical problems will come in the near future.  Sad

Timekoin - The World's Most Energy Efficient Encrypted Digital Currency
gyverlb
Hero Member
*****
Offline Offline

Activity: 896
Merit: 1000



View Profile
March 12, 2013, 05:27:22 PM
 #415

Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing?  
You are missing the fact that it was a great opportunity to double-spend any coins.
First: you send 1000 BTC to a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.
This is not so simple: the transaction you send to the first merchant is seen by all bitcoind nodes running 0.7.

So unless I missed something about tx management in Bitcoin nodes, to mount a double spend you must both:
  • have this fork happening so that 6 confirmations on the 0.8 fork doesn't really mean anything
  • propagate both transactions at the same time targeting 0.8 nodes with the first and 0.7 with the second, more or less blindly hoping that the first will reach a node run by the next 0.8 miner to find a block and the second a node run by the next 0.7 miner to mine a block.
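The "blindly hoping" part of gyverlb's second condition can be sketched as a tiny Monte Carlo model; both propagation probabilities below are pure assumptions, chosen only to show that the success rate is roughly their product:

```python
import random

# Sketch of the two-sided race gyverlb describes: tx1 must be the one
# seen by the next 0.8 miner to find a block, AND tx2 the one seen by
# the next 0.7 miner.  The two probabilities are assumptions made up
# for illustration, not measurements of the 2013 network.

def attempt(p_tx1_wins_on_08, p_tx2_wins_on_07):
    return (random.random() < p_tx1_wins_on_08
            and random.random() < p_tx2_wins_on_07)

random.seed(0)
trials = 200_000
hits = sum(attempt(0.6, 0.4) for _ in range(trials))
print(round(hits / trials, 3))  # close to 0.6 * 0.4 = 0.24
```

So even under generous assumptions the attack only works a fraction of the time per attempt, which fits piotr_n's later remark that it was possible but not easy.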

P2pool tuning guide
Trade BTC for €/$ at bitcoin.de (referral), it's cheaper and faster (acts as escrow and lets the buyers do bank transfers).
Tip: 17bdPfKXXvr7zETKRkPG14dEjfgBt5k2dd
piotr_n
Legendary
*
Offline Offline

Activity: 2053
Merit: 1354


aka tonikt


View Profile WWW
March 12, 2013, 05:28:22 PM
 #416

So the solution is to continue to increase the block size as demand provokes this issue then?
I guess... because what else? Are you going to appeal to people to make fewer transactions, hoping that it would solve the problem? Smiley
I don't believe you can, e.g., convince SatoshiDice to drop their lucrative business just because other people's transactions are getting queued...
Why should they care anyway, if they pay the same fees as you do?
Since we don't have another solution at hand, scaling up the storage limits seems to be the only option at the moment.
Unless we're OK with increasing the fees?

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
piotr_n
Legendary
*
Offline Offline

Activity: 2053
Merit: 1354


aka tonikt


View Profile WWW
March 12, 2013, 05:29:41 PM
 #417

Not sure why everyone is so panicked.  We only orphaned 25 blocks and the only danger was that you would accept the coins minted in those blocks (all transactions using other coins would eventually end up in the other chain as well).  If we just followed the network rules and waited 100 blocks to accept minted coins then there was actually no danger at all.  What am I missing? 
You are missing the fact that it was a great opportunity to double-spend any coins.
First: you send 1000 BTC to a merchant that uses Bitcoin 0.8.
Second: you pay the same 1000 BTC to another merchant who has an older client and is thus "looking" at the alternate branch, where the 1000 BTC has not been spent yet.
This is not so simple: the transaction you send to the first merchant is seen by all bitcoind nodes running 0.7.

So unless I missed something about tx management in Bitcoin nodes, to mount a double spend you must both:
  • have this fork happening so that 6 confirmations on the 0.8 fork doesn't really mean anything
  • propagate both transactions at the same time targeting 0.8 nodes with the first and 0.7 with the second, more or less blindly hoping that the first will reach a node run by the next 0.8 miner to find a block and the second a node run by the next 0.7 miner to mine a block.
Yes, you are right.
I am not saying that it was easy - which is why we don't know of anyone who managed to take advantage of it during the incident.
But it was definitely possible - hence the panic.

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
2112
Legendary
*
Offline Offline

Activity: 2128
Merit: 1068



View Profile
March 12, 2013, 05:36:07 PM
Last edit: March 12, 2013, 05:58:47 PM by 2112
 #418

The result is that 0.7 (by default, it can be tweaked manually) will not accept "too large" blocks (we don't yet know what exactly causes it, but it is very likely caused by many transactions in the block).
The "manual tweak" is exactly two lines. Anyone can apply it, because no recompilation is necessary. All it takes is to create a short text file and restart the Bitcoin client.

https://bitcointalk.org/index.php?topic=152208.0

Edit: I'm including the fix here because some users can't easily click through with their mobile browsers:

Just create a file named "DB_CONFIG" in the ".bitcoin" or "AppData/Roaming/Bitcoin" directory, containing the following:
Code:
set_lg_dir database
set_lk_max_locks 40000
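For anyone who prefers not to create the file by hand, a short script can write it. The default data-directory paths below are the conventional ones and may differ on your setup; adjust `base` accordingly:

```python
# Write the two-line DB_CONFIG workaround from the post above into
# the Bitcoin data directory.  Paths are the conventional defaults
# (an assumption); pass `base` explicitly if yours differs.
import os
import sys
from pathlib import Path

def default_datadir():
    if sys.platform.startswith("win"):
        return Path(os.environ["APPDATA"]) / "Bitcoin"
    return Path.home() / ".bitcoin"

def write_db_config(base=None):
    base = Path(base) if base is not None else default_datadir()
    base.mkdir(parents=True, exist_ok=True)
    path = base / "DB_CONFIG"
    path.write_text("set_lg_dir database\nset_lk_max_locks 40000\n")
    return path

# write_db_config()  # then restart the 0.7 client so BDB re-reads it
```

As 2112 notes, the client must be restarted afterwards for BDB to pick up the new lock limit.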

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
wtfvanity
Hero Member
*****
Offline Offline

Activity: 504
Merit: 500


WTF???


View Profile
March 12, 2013, 05:41:23 PM
 #419

The result is that 0.7 (by default, it can be tweaked manually) will not accept "too large" blocks (we don't yet know what exactly causes it, but it is very likely caused by many transactions in the block).
The "manual tweak" is exactly two lines. Anyone can apply it, because no recompilation is necessary. All it takes is to create a short text file and restart the Bitcoin client.

https://bitcointalk.org/index.php?topic=152208.0


Block size isn't the problem, it's the BDB.

The limitation is not related to block size, but rather number of transactions accessed/updated by a block.

In general, the default for 0.8 (250k) is fine.  Raising that to 500k is likely just as fine.

But it is not very simple to answer the question, because this 0.7 BDB lock issue becomes more complicated with larger BDB transactions.

          WTF!     Don't Click Here              
SgtSpike
Legendary
*
Offline Offline

Activity: 1400
Merit: 1005



View Profile
March 12, 2013, 05:47:27 PM
 #420

It's official, I guess SD just broke Bitcoin.  Thanks, Eric, for all those unspendable outputs and massive block sizes.  I'm sure your apologists will claim that this event is good for Bitcoin because we stress-tested it.  Fuck that noise.  If it had taken longer to reach this point, everyone would have been running 0.8 or newer, and the issue caused by old clients could not have happened.

I blame SD.  SD pushed our beta product way too far.  Shame on Eric and his greedy little BS company.  I hope its stock tanks.  I hope miners filter out the 1Dice addresses.  Fuck that noise!
To be clear, those outputs are perfectly spendable if other coins are sent at the same time.

I present this example:
https://blockchain.info/tx/0895a6fa923d399f5079c5a444a70a7543b5c34ebe4a5d21ae522350042b311e

This was a ZERO-FEE transaction.  The default Bitcoin-QT software included inputs of varying size, all the way down to 0.00000003 BTC.  As far as I know, I don't have any unspent satoshis in either of those addresses, but this proves the point that tiny amounts are most definitely spendable in the right situation.
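SgtSpike's point boils down to simple arithmetic: a dust output contributes its full value when spent alongside a larger input. A toy sketch (amounts in BTC; real relay and fee policy are ignored):

```python
# Toy illustration: a 3-satoshi output is impractical to spend alone,
# but bundled with a larger input it simply adds to the total value
# available to the transaction.  Fee handling here is deliberately
# simplified; real node policy is not modeled.

def combined_spend(inputs_btc, fee_btc=0.0):
    total = sum(inputs_btc)
    if total <= fee_btc:
        raise ValueError("inputs do not cover the fee")
    return round(total - fee_btc, 8)

print(combined_spend([0.00000003, 0.005]))          # 0.00500003
print(combined_spend([0.00000003, 0.005], 0.0005))  # dust still contributes
```

The dust input is never "lost"; it is only uneconomical to spend on its own.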