Bitcoin Forum
Author Topic: The MAX_BLOCK_SIZE fork  (Read 35596 times)
justusranvier
Legendary
*
Offline

Activity: 1400
Merit: 1013



View Profile
March 13, 2013, 01:48:27 PM
 #161

Quote
The <= 0.7 Satoshi implementation wasn't capable of dealing with such blocks.

It's worse than that. If < 0.8 implementations all behaved the exact same way, you could call their behavior a de facto standard; but since their behavior was not consistent even between nodes of the exact same version, you can't say that either. The behavior of all implementations prior to 0.8, with regard to the protocol, is non-deterministic.
caveden
Legendary
*
Offline

Activity: 1106
Merit: 1004



View Profile
March 13, 2013, 01:57:20 PM
 #162

Quote
It's worse than that. If < 0.8 implementations all behaved the exact same way, you could call their behavior a de facto standard,

I'd still call it a bug. Actually, it's still an unsolved bug. We've only worked around it; it's still there.
Gavin Andresen
Legendary
*
qt
Offline

Activity: 1652
Merit: 2301


Chief Scientist


View Profile WWW
March 14, 2013, 03:25:53 AM
 #163

Quote
How can it be a bug if it is a clearly defined behaviour in the documentation of the software dependency?

Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each BDB operation? I haven't been able to find that. Thanks!

How often do you get the chance to work on a potentially world-changing project?
alir
Member
**
Offline

Activity: 215
Merit: 11



View Profile
March 14, 2013, 03:36:27 AM
 #164

Quote
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each BDB operation? I haven't been able to find that. Thanks!

Now, now, Gavin. Play nice. ;)
Gavin Andresen
Legendary
*
qt
Offline

Activity: 1652
Merit: 2301


Chief Scientist


View Profile WWW
March 14, 2013, 03:47:20 AM
 #165

If Mr. Augustus found some documentation that I don't know about, I genuinely want to know about it, because it will save me time. Right now I'm throwing blocks at an instrumented v0.7.2 bitcoind that tells me how many locks are taken, so I can be sure that whatever fix we implement will always work.

alir
Member
**
Offline

Activity: 215
Merit: 11



View Profile
March 14, 2013, 04:03:06 AM
 #166

I don't believe he does, Mr. Andresen.

But now that the bug is resolved, will there still be an update to allow increased block sizes? I'm not understanding how the team intends to allow older versions to accept larger blocks from 0.8+.

Will a full-scale 0.8 update be forced?
Atruk
Hero Member
*****
Offline

Activity: 700
Merit: 500



View Profile
March 14, 2013, 05:23:15 AM
 #167

Quote
How can it be a bug if it is a clearly defined behaviour in the documentation of the software dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each BDB operation? I haven't been able to find that. Thanks!


Per-operation lock counts are difficult to calculate. I don't know much about how the Bitcoin code interacts with BDB, so take much of what I am going to write with a grain of salt. In the next few weeks I plan to print out chunks of the Bitcoin source code and try to understand how it works with the database as a personal enrichment exercise, and I'll share anything I find that might be useful. I can't promise anything material, but when my plate clears I can try to make an effort (for whatever all of those qualifiers are worth).

Here's Oracle's rather worthless documentation of locking and blocking in BDB. Normally there's one lock per database transaction, but it can get messy because of lock contention. What it sounds like you're looking for, though, is better documentation. Such a thing might exist, but Oracle...

A real max block size fix is probably going to have to involve picking a distant future deadline and hardforking things over to LevelDB. This might involve the painful experience of making a LevelDB version of 0.3.x to accompany newer clients, but LevelDB and BDB are different enough in their quirks that leaving BDB for LevelDB will probably mean abandoning BDB wholesale. How many years have gone by without a good WordPress port for PostgreSQL?

The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day. BDB isn't documented well enough for its quirks to be accounted for in other database software in all but simple cases (Oracle probably likes it this way, and BDB's lock contention is probably a selling point for their other database products), and if Bitcoin has demonstrated anything this far into its life, it's that Bitcoin attracts fringe use cases.

markm
Legendary
*
Offline

Activity: 3010
Merit: 1121



View Profile WWW
March 14, 2013, 07:23:00 AM
 #168

This simply shows yet again that, sorry, the implementation is not and cannot be the specification.

That functioning in the heterogeneous wild is a "hard problem" might suck, but that is really a stronger call for standards and specifications than functioning in the land of fairies and unicorns would be.

The more sure it is that we'll never never actually be able to be fully up to spec, the more important it probably is to actually have a spec to aspire to.

If we don't even have "second star to the right and straight on 'til morning" things might not bode well, but if we do have it, ahhh, then, the land of never never might not actually be so very bad.

So please, let's just admit we aren't up to spec and have never been up to spec, and focus on getting there instead of pretending that "the world is after all perfect, if requiring a little adjustment" is a bastion of software engineering.

(Just because it is true doesn't mean it is computer science. Smiley)

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
caveden
Legendary
*
Offline

Activity: 1106
Merit: 1004



View Profile
March 14, 2013, 07:26:19 AM
 #169

Quote
But now that the bug is resolved,

It is not. It's been worked around. But we still have this bug limiting blocks to 5k transactions at most, when they should be able to handle more.

Quote
will there still be an update to allow increased block sizes? I'm not understanding how the team intends to allow older versions to accept larger blocks from 0.8+.

Will a full-scale 0.8 update be forced?

I don't see any other way around it.

Quote
The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day.

hehe, dude, we'll have to abandon BDB in a few weeks. Or at least abandon this configuration which limits it to 5k transactions. We're already bumping up against it. The only reason we had this fork was precisely because 5k transactions per block is already not enough sometimes.
Atruk
Hero Member
*****
Offline

Activity: 700
Merit: 500



View Profile
March 14, 2013, 07:43:20 AM
 #170

Quote
The deadline to move to a LevelDB bitcoin client might have to be set in the very distant future, maybe even at the next halving day.

hehe, dude, we'll have to abandon BDB in a few weeks. Or at least abandon this configuration which limits it to 5k transactions. We're already bumping up against it. The only reason we had this fork was precisely because 5k transactions per block is already not enough sometimes.

The problem with BDB is actually worse than that... 5k database transactions isn't necessarily 5k Bitcoin transactions. BDB is a monster that needs to be killed with fire.

caveden
Legendary
*
Offline

Activity: 1106
Merit: 1004



View Profile
March 14, 2013, 07:49:54 AM
 #171

Yes, my point was that we definitely cannot wait until the next halving day to kill this beast, or it will kill us instead. Wink
2112
Legendary
*
Offline

Activity: 2128
Merit: 1073



View Profile
March 14, 2013, 10:34:57 AM
 #172

Quote
If Mr. Augustus found some documentation that I don't know about, I genuinely want to know about it, because it will save me time. Right now I'm throwing blocks at an instrumented v0.7.2 bitcoind that tells me how many locks are taken, so I can be sure that whatever fix we implement will always work.

The number of locks required depends not only on the number of transactions in the block (N) but also on the size of blkindex.dat (S). Even if you have a fixed upper limit on N, S grows without an upper bound.

The most commonly requested lock is "read with intent to write". As the process descends the B-tree it takes one of those locks for each page traversed until it reaches the leaf page, then it promotes the last lock to a write lock. So the number of locks required to insert a single Bitcoin transaction is O(H), where H is the height of the B-tree. This in turn is O(log_b S), where b is the number of k,v-pairs per page of the B-tree.

So for a single database transaction consisting of all the Bitcoin transactions in a block, the number of locks required is O(N * log_b S). This number has no fixed upper bound; it increases logarithmically with the size of blkindex.dat.

I'm sorry I can't give a good reference. It is in one of Springer's "Lecture Notes in Computer Science" volumes, something related to concurrency and deadlock detection/avoidance in database systems. The only thing I remember now is that it was one of the thickest LNCS volumes on the whole shelf.

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
caveden
Legendary
*
Offline

Activity: 1106
Merit: 1004



View Profile
March 14, 2013, 10:49:07 AM
 #173

Argh... BDB doesn't allow for an unlimited number of locks? (how many your system can handle, that is)
It really needs to specify a limit?
markm
Legendary
*
Offline

Activity: 3010
Merit: 1121



View Profile WWW
March 14, 2013, 11:08:19 AM
Last edit: March 15, 2013, 05:00:37 AM by markm
 #174

Quote
Argh... BDB doesn't allow for an unlimited number of locks? (how many your system can handle, that is)
It really needs to specify a limit?

Well, legend has it that once upon a time some systems had more things to do than just lock pages of databases, so, in short, yes.

Much as we have a maximum block size... Wink

So hey, y'know, maybe that darn magic number actually is a darn specification not an implementation artifact after all?

No, wait... it merely means that it is not only the maximum size of the blocks but also the maximum size of the block index at a given time (as measured in block height) that is a crucial limit to specify; and, happily enough, it turns out that the max size of a block has a relatively predictable effect upon the size of the block index at any given "height"?

-MarkM-

EDIT: Do we need to account for orphan blocks, too? Thus need to specify an orphan rate tolerance?

marcus_of_augustus
Legendary
*
Offline

Activity: 3920
Merit: 2349


Eadem mutata resurgo


View Profile
March 14, 2013, 08:38:29 PM
 #175

Quote
How can it be a bug if it is a clearly defined behaviour in the documentation of the software dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each BDB operation? I haven't been able to find that. Thanks!

Well, that's implementation-specific, as the tutorial (posted above) states. Sorry I can't be more helpful, but I am looking into it, so you'll be the first to know if I find anything.

Pre-0.8 Bitcoin specifies the limits for its BDB environment in db.cpp, lines 82 and 83:

Code:
    
dbenv.set_lk_max_locks(10000);
dbenv.set_lk_max_objects(10000);

The main point is that this is most definitely a limitation, but neither a bug nor an "unknown behaviour". Unfortunately it is a proxy limitation on Bitcoin block sizes, only loosely correlated with block data size, that we will have to live with for now, and have already lived with for some time.
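As an aside, the stop-gap circulated for stranded 0.7 nodes worked at exactly this level: rather than patching db.cpp, operators could raise the limits through a DB_CONFIG file in the data directory, which BDB reads when the environment is opened. From memory the advised contents were along these lines; treat the exact figure as an assumption and check the official alert before relying on it:

```
set_lk_max_locks 537000
```

Since this only changes one node's lock budget, it papers over the symptom without making the consensus-relevant limit explicit, which is the whole problem under discussion.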

Maybe basing transaction fees on the number of locks a transaction uses is another angle on a solution? Loosely specifying "1MB" as the block limit in fact abstracts a data size away from the physical configuration of how that data is stored and accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes, and storage.


markm
Legendary
*
Offline

Activity: 3010
Merit: 1121



View Profile WWW
March 14, 2013, 10:51:03 PM
 #176

Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If number of simultaneous accesses to the data enters into the spec it ought more to lean toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus while acknowledging the brilliance of your solution I find myself forced to say sorry but it still seems to me that number of database locks in a single corporation or other entity's living executing implementation or copy of an implementation is one of the many things that should if at all possible be abstracted away by the second star to the right and straight on 'til morning specification.

-MarkM-

zebedee
Donator
Hero Member
*
Offline

Activity: 668
Merit: 500



View Profile
March 15, 2013, 01:38:04 AM
 #177

Quote
Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If number of simultaneous accesses to the data enters into the spec it ought more to lean toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus while acknowledging the brilliance of your solution I find myself forced to say sorry but it still seems to me that number of database locks in a single corporation or other entity's living executing implementation or copy of an implementation is one of the many things that should if at all possible be abstracted away by the second star to the right and straight on 'til morning specification.

-MarkM-

Not only that, but it's easy to imagine implementations where verification is done in a single-threaded process that is isolated and does nothing else (and hence locks of any kind are entirely unnecessary), and any persistent db is maintained separately.

It's a shame the original bitcoin implementation was a monolithic piece of poor code. The lack of clean separation of responsibilities is really hindering its progress and testing.
marcus_of_augustus
Legendary
*
Offline

Activity: 3920
Merit: 2349


Eadem mutata resurgo


View Profile
March 15, 2013, 04:55:33 AM
 #178

Quote
Maybe look at basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit is in fact abstracting a data size away from the actual physical configuration of how that data is stored/accessed which is what actually defines the total transaction cost, includes CPU cycles, disk read/writes and storage.

Quote
Excellent reasoning... If bitcoin was about locking databases that would be just the very kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly ...

Quite. My suggestion was just an example, a hint towards some lateral thinking about the exact nature of the underlying problem we are facing here. It could be memory locks (I hope not), it could be physical RAM space, it could be memory accesses, CPU cycles, or total HD storage space occupied network-wide.

The point being, we are searching for a prescription for the physical, atomic limitation, as a network rule, that needs to be priced by the market of miners validating transactions and the nodes storing them, so that fees are paid and resources are allocated correctly, in such a way that the network scales the way we already know it theoretically can.

If we are going to hard fork, let's make sure it is for a justifiable, quantifiable reason. Otherwise we could be merely embarking on a holy-grail pursuit of the "best" DB, upgrading endlessly to keep scaling up.

Bitcoin implementations could be DB-agnostic if they used the right metric. Jeff Garzik has some good ideas, like a "transactions accessed" metric, as I'm sure others do too. Maybe some kind of scale-independent transactions-accessed block limit and fee-scheduling rule?

bbulker
Full Member
***
Offline

Activity: 124
Merit: 100


View Profile
March 15, 2013, 10:05:37 AM
 #179

Have only read OP post, but are you saying that if everyone in the world tried to do a transaction right now that it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!  Shocked
markm
Legendary
*
Offline

Activity: 3010
Merit: 1121



View Profile WWW
March 15, 2013, 10:35:19 AM
 #180

Quote
Have only read OP post, but are you saying that if everyone in the world tried to do a transaction right now that it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!  Shocked

Ha ha, nice way of looking at it. I won't presume to check your math, but really, even if you dropped or picked up an order of magnitude, it's still lucky for us that not everyone in the world is on the internet yet.

-MarkM-
