Bitcoin Forum
  Show Posts
121  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 19, 2013, 04:04:17 PM
The changes in the last year were "soft forks" -- forks that required all miners to upgrade (if they don't, their blocks are ignored), but that do not require merchants/users to upgrade.
For this change the distinction is hardly relevant, since it won't happen unless the merchants/users who run full nodes upgrade first.

A soft and a hard fork are not comparable.

In a soft fork, it is about getting the _majority_ of _miners_ behind the rule. Every piece of old software keeps working. Depending on the change, it may be advisable for merchants to upgrade to get the extra rules enforced, but those who don't just get dropped to SPV-level security. Nothing will break as long as somewhat more than 50% of hash power enforces the new rule.

In a hard fork, it is about getting _all_ of _everyone_ to change the rule at exactly the same time. Doing a hard fork where not everyone is on the same side is an outright disaster. Every coin that existed before the fork will be spendable once on each side of the chain. If this happens, it is economic suicide for the system. Sure, it may recover after a short while, once people realize they should pick the side that most others chose, but it is not something I want to see happening.

The only way a hard fork can be done is when there is reasonable certainty that all players in the network agree.
122  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 10:02:52 PM
First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that allow people to build a global decentralized payment system. I think very few people understand a forever-limited block size to be part of these rules.

However, with no limit on block size, it effectively becomes the miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks while all the rest agree on 10 MiB blocks, ugly block-shunning rules will be necessary to keep such blocks from filling everyone's hard drive (yes, larger blocks' slower relay makes them unlikely to be accepted, but it just requires one lucky fool to succeed...).

I think retep raises very good points here: the block size (whether voluntary or enforced) needs to result in a system that remains verifiable for many. Who those many are will probably change gradually. Over time, more and more users will probably move to SPV nodes (or more centralized things like e-wallet sites), and that is fine. But if we give up the ability for non-megacorp entities to verify the chain, we might as well be using a central clearinghouse. There is of course a wide spectrum between "I can download the entire chain on my phone" and "only 5 bank companies in the world can run a fully verifying node", but I think it's important that we choose which point in between is acceptable.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.
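
Just to illustrate the shape of such a schedule, here is a toy sketch; the 10 MiB starting point is the one mentioned above, while the 7%/year growth rate and the blocks-per-year constant are numbers picked purely for this example, not part of any proposal:

Code:
// Toy illustration of "one-time jump, then slow exponential growth".
// The 7%/year rate and 52560 blocks/year are hypothetical example values.
#include <cmath>
#include <cstdint>
#include <cstdio>

int64_t MaxBlockSizeAt(int nBlocksSinceFork)
{
    const double base = 10.0 * 1024 * 1024;   // 10 MiB right after the fork
    const double blocksPerYear = 52560.0;     // ~6 blocks/hour * 24 * 365
    double years = nBlocksSinceFork / blocksPerYear;
    return (int64_t)(base * std::pow(1.07, years));  // +7% per year (example only)
}

int main()
{
    for (int y = 0; y <= 20; y += 5)
        std::printf("%2d years after the fork: %lld bytes\n",
                    y, (long long)MaxBlockSizeAt(y * 52560));
    return 0;
}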

Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it get spontaneously set is the very definition of "decentralized order".

Then I think you misunderstand what a hard fork entails. The only way a hard fork can succeed is when _everyone_ agrees to it. Developers, miners, merchants, users, ... everyone. A hard fork that succeeds is the ultimate proof that Bitcoin as a whole is a consensus of its users (and not just a consensus of miners, who are only given authority to decide upon the order of otherwise valid transactions).

Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?
123  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 release candidate 1 on: February 16, 2013, 12:32:17 PM
i actually had to do this reindex process about half a dozen times... but i'd stop it every so often to make a backup & it finally finished.

it seemed to have some random chance to get into a loop where it gets orphan blocks over and over

If you see the reindexing finding orphans over and over, it usually means one block couldn't be found, but later blocks can be. Since one block in their ancestry can't be connected, they are reported as orphans and skipped. It's not an infinite loop though; it will just go over all block files. We've had some bugs in pre-releases that could cause this, but for someone first trying 0.8.0rc1, my guess is that you had a few blocks on disk that actually were corrupted.

How did you fix it? If my assumption is correct, the only solution would be waiting a few minutes for it to skip all blocks you already have, and then waiting for it to start downloading from before the invalid block.
124  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 release candidate 1 on: February 16, 2013, 12:21:11 PM
Quick question: Why no more detach database on shutdown?

Because the block database(s) are LevelDB now. The BDB databases use a "database environment" (a directory shared by all databases) for journalling and consistency checks. To support moving the database files around, the files were "detached" from the environment at shutdown. Originally, this was always done. Since 0.6.1 this was made optional for the block chain databases (wallets were still always detached at shutdown), and -detachdb was introduced to re-enable it.

LevelDB databases use an entire directory by themselves, and are independent. So there is no environment to detach from anymore. The wallet in 0.8 is still BDB, and is still always detached.
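
For the curious, this is roughly what it looks like at the API level: a LevelDB database is opened by pointing it at a directory, and once the node is shut down that directory can simply be copied or moved as a whole. A minimal sketch (the path below is just an example):

Code:
// Minimal sketch of opening a LevelDB database: the database *is* the
// directory, so there is no shared environment to detach from.
#include <cassert>
#include "leveldb/db.h"

int main()
{
    leveldb::Options options;
    options.create_if_missing = true;

    leveldb::DB* db = NULL;
    leveldb::Status status =
        leveldb::DB::Open(options, "/path/to/datadir/chainstate", &db);
    assert(status.ok());

    // ... reads/writes via db->Get() / db->Put() ...

    delete db;  // closes the database; the directory is now safe to copy or move
    return 0;
}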

Quote
Is the old database removed or can I delete something to save space?

No, it's not. But if you don't intend to go back to 0.7.x or earlier, you can delete blkindex.dat, blk0001.dat and blk0002.dat from the datadir (don't delete anything inside the blocks/ subdirectory though).
125  Bitcoin / Bitcoin Discussion / Re: Parts of the code related to the 21 million limit on: February 15, 2013, 09:18:27 PM
Here, in the v0.7.2 source code, is the function that calculates the reward per block at a given height. The 21M limit comes from adding all the resulting rewards.
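Slightly abridged, that function looks like the sketch below: the subsidy starts at 50 BTC and is halved every 210,000 blocks, so the sum of all rewards converges to just under 21 million BTC.

Code:
// Slightly abridged sketch of the reward function from main.cpp of that era.
typedef long long int64;
static const int64 COIN = 100000000;   // 1 BTC expressed in satoshi

int64 GetBlockValue(int nHeight, int64 nFees)
{
    int64 nSubsidy = 50 * COIN;

    // Subsidy is cut in half every 210000 blocks,
    // which happens approximately every 4 years.
    nSubsidy >>= (nHeight / 210000);

    return nSubsidy + nFees;
}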
126  Bitcoin / Development & Technical Discussion / Re: Too many disk writes during blockchain sync on: February 12, 2013, 08:22:22 PM
Please try v0.8.0rc1.
127  Bitcoin / Bitcoin Technical Support / Re: bitcoin client authentication on: February 11, 2013, 09:44:03 PM
What would be really good would be a client feature where it has to handshake with the mining network, and some tiny fingerprint appears in the block-chain that can then be viewed from a trusted site. You then know that your client is a real one.

You want the client to validate itself, and tell you it has verified that it is authentic?

Do you think that someone who distributes a malicious version won't just make it skip that check?
128  Bitcoin / Bitcoin Technical Support / Re: bitcoin-qt: Move Downloaded blockchain to another installation on: February 11, 2013, 09:42:01 PM
v0.8.0 is not released yet, there's only a release candidate.
129  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 release candidate 1 on: February 11, 2013, 08:32:59 PM
So with pruning, the whole block[chain] is still downloaded and served. Only a part of it is then used for the "user-side" of bitcoin, where it cares about what your addresses are and that?

Not really. None of this has anything to do with the wallet implementation. Wallets always track all transactions relevant to the user. This has not (or hardly) changed between 0.7 and 0.8.

What has changed is how the block and transaction validation works. Previously, we stored:
  • the full blocks (blk000?.dat)
  • the (byte) position of every block and transaction in it (blkindex.dat)
  • for every transaction output, whether and where it was spent (also blkindex.dat)
This required an ever-growing index, and fast access to the full (ever growing) block data. This was slow.

The new system stores:
  • the full blocks (blocks/blk000??.dat)
  • the (byte) position of every block in it (but not every transaction!) (blocks/index/*)
  • a separate database with a copy of the unspent transaction outputs (so not an index with byte positions into the block chain, but a copy of just the parts of the transactions that may still be relevant in the future) (chainstate/*)
  • an undo log for the chain state, so we can go back in time for reorganisations (blocks/rev000??.dat).
The big advantage is that we now only need fast access to the chain state (around 150 MB), instead of to the full blocks and the full index (several GB).
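
To make the difference concrete, here is a rough sketch of what an entry in the new chain state conceptually contains (this is not the actual serialization format, just the information that is kept per transaction):

Code:
// Conceptual sketch only - not the real on-disk format. The chain state maps
// each transaction id to the data needed to validate future spends of its
// still-unspent outputs; spent outputs and fully spent transactions are
// simply dropped from this set.
#include <cstdint>
#include <map>
#include <vector>

struct CTxOutSketch {
    int64_t nValue;                     // amount in satoshi
    std::vector<unsigned char> script;  // scriptPubKey needed to verify a spend
};

struct CCoinsSketch {
    bool fCoinBase;                     // coinbase outputs need 100 confirmations
    int nHeight;                        // height of the block containing the tx
    std::vector<CTxOutSketch> vout;     // only the outputs that are still unspent
};

// txid -> unspent output data; around 150 MB in total at the time of 0.8.
using ChainStateSketch = std::map<std::vector<unsigned char>, CCoinsSketch>;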

I initially called this new database/validation system "ultraprune", but that is a very confusing name, as there is no actual pruning going on: we still keep all blocks/transactions around. The block data is still necessary for rescanning, reorganising, serving to other nodes, and the getrawtransaction RPC call. The code that resulted in the new database system actually started as an experiment in how small the chain state (aka UTXO set) could be represented, by pruning it as hard as possible. That is where the name comes from.
130  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 release candidate 1 on: February 10, 2013, 12:49:32 AM
Quick question regarding this. Does importprivkey still work? It rescans the block chain to see if it has been used, would this still work with the way it stores the historical transaction ids? Or would I need to use the txindex=1 if I plan on importing keys?

No problem - rescan just goes through all old blocks again, one by one. Those are still available in 0.8. The only thing that changes is that no information in the index is kept about spent transactions, so without txindex, it's impossible to find a transaction given just its txid. In practice, really the only thing affected is the getrawtransaction RPC.
131  Bitcoin / Bitcoin Discussion / Re: Bitcoin-Qt / bitcoind version 0.8.0 release candidate 1 on: February 10, 2013, 12:32:39 AM
i'm just curious, are there plans to speed up the data transfer rates? on weaker connections like mine, it will still take a long time to sync. this applies even more to less well connected 3rd world countries!

There are known problems with the current block synchronisation mechanism. Not exactly a bug, but just a crappy implementation. It's certainly intended to be changed, but it's not a trivial thing to do. Using the bootstrap.dat torrent, or using -connect=<ip> to a known fast node (not necessarily a trusted one), can speed up the initial download a lot. Not really nice solutions, but for now, it's all we have.

Quote
looking into stream compression, or even other ways to compress based on references in the existing database ... and bundling up even more blocks in larger transfers is probably the only way to go?

Block compression is certainly possible, but it's hard to get significant improvements. With a lot of effort and a very specifically designed compressed format, I expect we may get to something like 40%. Unfortunately, the blocks contain lots of hashes and cryptographic signatures, which are essentially uncompressible. Right now, I don't think these are a priority. Fixing the download mechanism will have a much larger impact.

1. Can we now delete the legacy blk000x.dat behemoth files?

Yes, you can if you don't intend to downgrade to 0.7.x (or below) code anymore. Note that the old and the new block files are hardlinked on supported platforms/filesystems, so you won't actually gain any space by deleting them. You can however delete blkindex.dat.

What's the disk space requirement of LevelDB vs the old BDB? (In percent.)

There's no simple answer to this, as it's not just a change from BDB to LevelDB: the actual layout and organisation of the databases changed as well. The actual blocks haven't changed (they are in different, more and smaller files, but the data is the same). The old blkindex.dat (BDB, ~1650 MiB) was replaced by:
  • the block index (LevelDB, $DATADIR/blocks/index, ~ 30 MiB)
  • the chain state (LevelDB, $DATADIR/chainstate, ~ 170 MiB)
  • the undo data (custom format, $DATADIR/blocks/rev*.dat, ~ 700 MiB)
So, roughly, the old index data now takes half as much space, but it's not entirely a fair comparison (we actually do store less information now).
132  Bitcoin / Development & Technical Discussion / Re: what is -checkblocks for, and why does it default so high? on: February 06, 2013, 03:25:26 PM
Does this feature know about blockchain reorgs?

Yes, it does. It uses the same mechanism the block synchronization code uses to find forks. The wallet stores a series of hashes: the last block it saw, its 9 direct ancestors, and then hashes of blocks at exponentially increasing distances further back.
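A simplified sketch of how such a list is built (this is essentially the same "block locator" idea the network synchronization uses; the code below is illustrative, not the actual wallet code):

Code:
// Simplified sketch of building a "locator": the most recent hash, then the
// ~9 blocks before it one by one, then hashes at exponentially growing steps
// back to the genesis block. Walking this list front to back, the first hash
// the other side recognizes gives the last common block, even after a reorg.
#include <cstdint>
#include <vector>

template <typename Hash>
std::vector<Hash> BuildLocator(const std::vector<Hash>& chain)  // chain[0] = genesis
{
    std::vector<Hash> locator;
    if (chain.empty())
        return locator;

    int64_t step = 1;
    int64_t i = (int64_t)chain.size() - 1;
    while (i > 0) {
        locator.push_back(chain[i]);
        if (locator.size() > 10)
            step *= 2;            // after the first ~10 entries, double the stride
        i -= step;
    }
    locator.push_back(chain[0]);  // always end with the genesis block
    return locator;
}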

133  Bitcoin / Development & Technical Discussion / Re: what is -checkblocks for, and why does it default so high? on: February 05, 2013, 09:32:55 AM
if you're sure your blockchain won't get corrupted (RAID with btrfs for example) you can set it to 0 without any problems.

-checkblocks=0 actually means "check all blocks" (for compatibility with the parameterless pre-0.6 -checkblocks, which checked everything instead of just the last 2500 blocks).
134  Bitcoin / Development & Technical Discussion / Re: what is -checkblocks for, and why does it default so high? on: February 05, 2013, 09:05:16 AM
I notice in the current git version of bitcoin-qt it sets -checkblocks to 2500 by default, which results in:

  "Verifying last 2500 blocks at level 1"

each time I start bitcoin-qt.

What is that for?  Why do blocks need verifying?  Nothing writes blocks to disk except bitcoin-qt, so what's this extra verification step for?  Is it perhaps like a 'rescan', looking for new transactions which may not be in my wallet yet?

-rescan is a wallet option, which rescans the block chain for transactions missing from the wallet. It shouldn't be necessary in normal operation since 0.3.21 (since then, wallets automatically rescan from the last block they 'saw').

The primary reason for -checkblocks (a feature that has existed for as long as I remember, but was only made configurable in 0.6) is catching accidental disk corruption of the block chain file, which could otherwise result in rejecting the best chain. It is certainly not a protection against an attacker who is able to write to your block files (though it does make such an attack harder).

Also note that both the meaning of the check levels and the default number of blocks to scan have been changed in git head (only 288 blocks, but with a far more thorough check). This is safe, as block data corruption is no longer a chain forking risk in the new database layout (only the unspent transaction output database, aka coins database, aka chainstate, matters).

Here's an explanation of the new -checklevel meanings (for 0.8):
  • 0: Validate all block headers + compare (by hash) to blocks on disk for the last -checkblocks blocks
  • 1: In addition, verify (standalone) validity of those -checkblocks blocks
  • 2: In addition, verify that undo data matches checksums
  • 3: In addition, check that the current chainstate can reasonably be the result of the last N blocks, where N is limited by both -checkblocks and the amount of in-memory cache (-dbcache); typically it's around 150.
  • 4: In addition, for the last N blocks (see above), do full validation (including signature checks).
Note that the new default (3) is in fact a stronger check than the previous highest level (6).
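
The levels are cumulative; structurally, the verification loop looks roughly like the sketch below (the helper functions are made-up stand-ins for the real 0.8 checks; only the nesting of the levels is the point):

Code:
struct BlockSketch {};  // stand-in for a deserialized block

// Hypothetical stand-ins for the real 0.8 checks; each returns true on success.
static bool ReadBlockFromDiskSketch(BlockSketch&, int)            { return true; }
static bool CheckBlockStandaloneSketch(const BlockSketch&)        { return true; }
static bool CheckUndoChecksumSketch(const BlockSketch&)           { return true; }
static bool RollBackAndForwardSketch(const BlockSketch&, bool)    { return true; }

bool VerifyLastBlocks(int nCheckLevel, int nCheckBlocks)
{
    for (int i = 0; i < nCheckBlocks; i++) {
        BlockSketch block;
        if (!ReadBlockFromDiskSketch(block, i))                 // level 0: headers + on-disk hashes
            return false;
        if (nCheckLevel >= 1 && !CheckBlockStandaloneSketch(block))
            return false;                                       // level 1: standalone block validity
        if (nCheckLevel >= 2 && !CheckUndoChecksumSketch(block))
            return false;                                       // level 2: undo data checksums
        if (nCheckLevel >= 3 && !RollBackAndForwardSketch(block, nCheckLevel >= 4))
            return false;                                       // level 3: chainstate consistency,
                                                                // level 4: plus signature checks
    }
    return true;
}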
135  Bitcoin / Development & Technical Discussion / Re: Add a "low priority" setting for Windoze? on: January 22, 2013, 07:45:00 PM
BTW: I'm not saying that a checkbox to lower CPU priority (or even making it the default) is a bad idea - I just don't think it will help prevent freezes.

As I already stated, I have tested this on my laptop many times; I am certain that it must be CPU and not I/O (it does not *freeze* when the task has been set to low priority).

Interesting - thanks for reporting, in that case.
136  Bitcoin / Development & Technical Discussion / Re: Add a "low priority" setting for Windoze? on: January 21, 2013, 10:36:52 PM
I'm quite sure your computer freezing up is not caused by the CPU needed for verifying signatures, but by the database I/O blocking your OS and other applications from accessing your hard disk. Scheduling the task at low priority will not help in that case.

The next release (0.8) should decrease I/O requirements a lot. If you're interested, feel free to try the test builds.

BTW: I'm not saying that a checkbox to lower CPU priority (or even making it the default) is a bad idea - I just don't think it will help prevent freezes.
137  Bitcoin / Development & Technical Discussion / Re: Is the Qt client killing the hard drive slowly? on: January 17, 2013, 10:35:15 AM
Current versions of the reference client indeed cause excessive disk I/O. The next version (0.8, not yet released) should improve upon this significantly (it uses a more modern database engine, and a new database layout).
138  Bitcoin / Development & Technical Discussion / Re: Experimental pre-0.8 builds for testing on: January 16, 2013, 06:00:57 PM
Are you joking? Your whole attitude and the reality (0.7.99 is experimental!) push new people to use bootstrap.dat, and this will only increase the longer you wait.
And if a newbie complains to me about the long first initialization, of course I will point them to the bootstrap.dat method; who won't?

Of course. I'll point them to bootstrap.dat now too. But that doesn't mean we don't have to fix the real problem (the fact that the built-in block download code is very crappy). If things were working as they should, we wouldn't need to tell new users about bootstrap.dat (although it may still be useful).

My attitude? I don't want to be responsible for a network fork. The purpose of the reference client is implementing a full node, and that means zero trust: it will verify all incoming data. If you can't live with the resource requirements that are needed for that, and consider trusting some instance to do checking for you, you're better off using a lightweight client (which only checks block headers - see SPV mode in Satoshi's paper), as those don't need trust in the data source and don't risk creating a network fork if the trusted data is wrong.

I think everything that had to be said, is said now. You may disagree with me, but if so, please take it elsewhere. I won't respond anymore to messages here which are not about getting 0.8 tested.
139  Bitcoin / Development & Technical Discussion / Re: Experimental pre-0.8 builds for testing on: January 16, 2013, 04:58:59 PM
Sorry, I am getting tired; I just wanted to reply with the 4th, last part, but it seems superfluous. Therefore only one obvious point: even if you don't know what "cmp" does, and what you can conclude from my cited output, the sum of bytes in all these blocks/blk*.dat files is larger than the original bootstrap.dat. So you are telling us something false.
(And because all except the last are 2^27 bytes in size, you don't even need to add them up; just consider the size of bootstrap.dat modulo 2^27 as a check.)

You make the wrong assumption that because the file format is the same, the concatenation will be binary identical. In the 0.8 code, the files are pre-allocated in blocks of 16 MiB, to avoid high fragmentation. That means that at the end of those files, you'll have some zero bytes. So the concatenated files will have these zeroes too, and you'll get a slightly larger bootstrap.dat. Both the loading code in 0.7 and 0.8 should be able to deal with holes in the files, though.
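
For what it's worth, here is a simplified sketch of why that padding is harmless: the import code scans for the 4 network magic bytes and ignores everything else, so runs of zero bytes are simply skipped (this is an illustration, not the actual LoadExternalBlockFile code):

Code:
// Simplified sketch: a block-file importer that scans for the 4 network magic
// bytes and ignores everything else, so zero padding at the end of
// pre-allocated files (or inside a concatenated bootstrap.dat) is skipped.
#include <cstdint>
#include <cstdio>
#include <vector>

static const unsigned char pchMessageStart[4] = {0xf9, 0xbe, 0xb4, 0xd9};

void ScanBlockFile(std::FILE* file)
{
    int matched = 0;
    int c;
    while ((c = std::fgetc(file)) != EOF) {
        // Advance through the magic; any other byte (including zero padding)
        // is simply ignored.
        if ((unsigned char)c == pchMessageStart[matched])
            matched++;
        else
            matched = ((unsigned char)c == pchMessageStart[0]) ? 1 : 0;

        if (matched == 4) {
            matched = 0;
            unsigned char sizeBytes[4];
            if (std::fread(sizeBytes, 1, 4, file) != 4)
                break;
            // Block size follows the magic as a 4-byte little-endian integer.
            uint32_t nSize = (uint32_t)sizeBytes[0]
                           | (uint32_t)sizeBytes[1] << 8
                           | (uint32_t)sizeBytes[2] << 16
                           | (uint32_t)sizeBytes[3] << 24;
            std::vector<unsigned char> block(nSize);
            if (std::fread(block.data(), 1, nSize, file) != nSize)
                break;
            // ... hand the serialized block to the validation code here ...
        }
    }
}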

See, I didn't lie.
140  Bitcoin / Development & Technical Discussion / Re: Experimental pre-0.8 builds for testing on: January 16, 2013, 04:56:17 PM
"completeness of the verification" let the user decide what degree he wants. There is an option -checklevel (introduced by yourself, add e.g. a 7 for signature-check if you like, or whatever you like personally to fine-tune control the verification process when block loading) -- I tried do do all these with my blockparser I know bitcoin-0.7.99 does when bootstraping data (to get a fair real-time comparison).
But initialization with the bootstrap.dat which you got from a trusted source, should need no (deep?) verification to judify long real-times!

-checkblocks is about how much verification you want at startup of the data you already have. I find disabling checks when importing a bootstrap.dat very risky - someone distributing "trusted" files which have an accidental (or deliberate!) error in them is enough to cause a block chain split, which in case of widespread deployment could be devastating for the bitcoin economy. If there were something significant to be gained, it could be made optional behind a flag with a big warning, but I doubt it would help much (I don't have numbers, and you're welcome to do benchmarks yourself if you want to help), as most of the time is spent maintaining the database, not doing checks.

Quote
"As said, not a priority now" -- well I present you quantitative real-time measurement and you judge this is of no importance.
"Yes if you do everything in RAM using a hash map, I'm sure it will be faster (it should!)." Sorry, your 0.7.99 with bootstrap.dat also (could/should?) do everything in RAM, and I wrote already: the write back of the new data-structure in 0.7.99 happens in parallel to the CPU on my system, so no (a very, very negligible) real-time win possible, if no writing back to disk occurs. Your qualitative comment "it will be faster (it should!)", I already estimated quantitatively to be > 10 to give the true magnitudes. :-(

Yes, thank you very much for those numbers. I know there are performance improvements possible; I've told you a zillion times now. This thread is about doing tests, so we can get 0.8 out as soon as possible, because 0.7.x's performance is terrible, and 0.8 won't be released before it is stable.

Can we please stop this discussion, and get this thread back to test results?